Oct 02 18:14:01 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 02 18:14:01 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 02 18:14:01 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 18:14:01 localhost kernel: BIOS-provided physical RAM map:
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 02 18:14:01 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 02 18:14:01 localhost kernel: NX (Execute Disable) protection: active
Oct 02 18:14:01 localhost kernel: APIC: Static calls initialized
Oct 02 18:14:01 localhost kernel: SMBIOS 2.8 present.
Oct 02 18:14:01 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 02 18:14:01 localhost kernel: Hypervisor detected: KVM
Oct 02 18:14:01 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 02 18:14:01 localhost kernel: kvm-clock: using sched offset of 4684690831 cycles
Oct 02 18:14:01 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 02 18:14:01 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 02 18:14:01 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 02 18:14:01 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 02 18:14:01 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 02 18:14:01 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 02 18:14:01 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 02 18:14:01 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 02 18:14:01 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 02 18:14:01 localhost kernel: Using GB pages for direct mapping
Oct 02 18:14:01 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 02 18:14:01 localhost kernel: ACPI: Early table checksum verification disabled
Oct 02 18:14:01 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 02 18:14:01 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:14:01 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:14:01 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:14:01 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 02 18:14:01 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:14:01 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 18:14:01 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 02 18:14:01 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 02 18:14:01 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 02 18:14:01 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 02 18:14:01 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 02 18:14:01 localhost kernel: No NUMA configuration found
Oct 02 18:14:01 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 02 18:14:01 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct 02 18:14:01 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 02 18:14:01 localhost kernel: Zone ranges:
Oct 02 18:14:01 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 02 18:14:01 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 02 18:14:01 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 18:14:01 localhost kernel:   Device   empty
Oct 02 18:14:01 localhost kernel: Movable zone start for each node
Oct 02 18:14:01 localhost kernel: Early memory node ranges
Oct 02 18:14:01 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 02 18:14:01 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 02 18:14:01 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 18:14:01 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 02 18:14:01 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 02 18:14:01 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 02 18:14:01 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 02 18:14:01 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 02 18:14:01 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 02 18:14:01 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 02 18:14:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 02 18:14:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 02 18:14:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 02 18:14:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 02 18:14:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 02 18:14:01 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 02 18:14:01 localhost kernel: TSC deadline timer available
Oct 02 18:14:01 localhost kernel: CPU topo: Max. logical packages:   8
Oct 02 18:14:01 localhost kernel: CPU topo: Max. logical dies:       8
Oct 02 18:14:01 localhost kernel: CPU topo: Max. dies per package:   1
Oct 02 18:14:01 localhost kernel: CPU topo: Max. threads per core:   1
Oct 02 18:14:01 localhost kernel: CPU topo: Num. cores per package:     1
Oct 02 18:14:01 localhost kernel: CPU topo: Num. threads per package:   1
Oct 02 18:14:01 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 02 18:14:01 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 02 18:14:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 02 18:14:01 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 02 18:14:01 localhost kernel: Booting paravirtualized kernel on KVM
Oct 02 18:14:01 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 02 18:14:01 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 02 18:14:01 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 02 18:14:01 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 02 18:14:01 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 02 18:14:01 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 02 18:14:01 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 18:14:01 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 02 18:14:01 localhost kernel: random: crng init done
Oct 02 18:14:01 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 02 18:14:01 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 02 18:14:01 localhost kernel: Fallback order for Node 0: 0 
Oct 02 18:14:01 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 02 18:14:01 localhost kernel: Policy zone: Normal
Oct 02 18:14:01 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 02 18:14:01 localhost kernel: software IO TLB: area num 8.
Oct 02 18:14:01 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 02 18:14:01 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 02 18:14:01 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 02 18:14:01 localhost kernel: Dynamic Preempt: voluntary
Oct 02 18:14:01 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 02 18:14:01 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 02 18:14:01 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 02 18:14:01 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 02 18:14:01 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 02 18:14:01 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 02 18:14:01 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 02 18:14:01 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 02 18:14:01 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 18:14:01 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 18:14:01 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 18:14:01 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 02 18:14:01 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 02 18:14:01 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 02 18:14:01 localhost kernel: Console: colour VGA+ 80x25
Oct 02 18:14:01 localhost kernel: printk: console [ttyS0] enabled
Oct 02 18:14:01 localhost kernel: ACPI: Core revision 20230331
Oct 02 18:14:01 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 02 18:14:01 localhost kernel: x2apic enabled
Oct 02 18:14:01 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 02 18:14:01 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 02 18:14:01 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 02 18:14:01 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 02 18:14:01 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 02 18:14:01 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 02 18:14:01 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 02 18:14:01 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 02 18:14:01 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 02 18:14:01 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 02 18:14:01 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 02 18:14:01 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 02 18:14:01 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 02 18:14:01 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 02 18:14:01 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 02 18:14:01 localhost kernel: x86/bugs: return thunk changed
Oct 02 18:14:01 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 02 18:14:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 02 18:14:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 02 18:14:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 02 18:14:01 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 02 18:14:01 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 02 18:14:01 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 02 18:14:01 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 02 18:14:01 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 02 18:14:01 localhost kernel: landlock: Up and running.
Oct 02 18:14:01 localhost kernel: Yama: becoming mindful.
Oct 02 18:14:01 localhost kernel: SELinux:  Initializing.
Oct 02 18:14:01 localhost kernel: LSM support for eBPF active
Oct 02 18:14:01 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 18:14:01 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 18:14:01 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 02 18:14:01 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 02 18:14:01 localhost kernel: ... version:                0
Oct 02 18:14:01 localhost kernel: ... bit width:              48
Oct 02 18:14:01 localhost kernel: ... generic registers:      6
Oct 02 18:14:01 localhost kernel: ... value mask:             0000ffffffffffff
Oct 02 18:14:01 localhost kernel: ... max period:             00007fffffffffff
Oct 02 18:14:01 localhost kernel: ... fixed-purpose events:   0
Oct 02 18:14:01 localhost kernel: ... event mask:             000000000000003f
Oct 02 18:14:01 localhost kernel: signal: max sigframe size: 1776
Oct 02 18:14:01 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 02 18:14:01 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 02 18:14:01 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 02 18:14:01 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 02 18:14:01 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 02 18:14:01 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 02 18:14:01 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 02 18:14:01 localhost kernel: node 0 deferred pages initialised in 18ms
Oct 02 18:14:01 localhost kernel: Memory: 7765420K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616516K reserved, 0K cma-reserved)
Oct 02 18:14:01 localhost kernel: devtmpfs: initialized
Oct 02 18:14:01 localhost kernel: x86/mm: Memory block size: 128MB
Oct 02 18:14:01 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 02 18:14:01 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 02 18:14:01 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 02 18:14:01 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 02 18:14:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 02 18:14:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 02 18:14:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 02 18:14:01 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 02 18:14:01 localhost kernel: audit: type=2000 audit(1759428839.917:1): state=initialized audit_enabled=0 res=1
Oct 02 18:14:01 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 02 18:14:01 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 02 18:14:01 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 02 18:14:01 localhost kernel: cpuidle: using governor menu
Oct 02 18:14:01 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 02 18:14:01 localhost kernel: PCI: Using configuration type 1 for base access
Oct 02 18:14:01 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 02 18:14:01 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 02 18:14:01 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 02 18:14:01 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 02 18:14:01 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 02 18:14:01 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 02 18:14:01 localhost kernel: Demotion targets for Node 0: null
Oct 02 18:14:01 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 02 18:14:01 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 02 18:14:01 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 02 18:14:01 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 02 18:14:01 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 02 18:14:01 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 02 18:14:01 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 02 18:14:01 localhost kernel: ACPI: Interpreter enabled
Oct 02 18:14:01 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 02 18:14:01 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 02 18:14:01 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 02 18:14:01 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 02 18:14:01 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 02 18:14:01 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 02 18:14:01 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [3] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [4] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [5] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [6] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [7] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [8] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [9] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [10] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [11] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [12] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [13] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [14] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [15] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [16] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [17] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [18] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [19] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [20] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [21] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [22] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [23] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [24] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [25] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [26] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [27] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [28] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [29] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [30] registered
Oct 02 18:14:01 localhost kernel: acpiphp: Slot [31] registered
Oct 02 18:14:01 localhost kernel: PCI host bridge to bus 0000:00
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 02 18:14:01 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 02 18:14:01 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 02 18:14:01 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 18:14:01 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 02 18:14:01 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 02 18:14:01 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 02 18:14:01 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 02 18:14:01 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 02 18:14:01 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 02 18:14:01 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 02 18:14:01 localhost kernel: iommu: Default domain type: Translated
Oct 02 18:14:01 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 02 18:14:01 localhost kernel: SCSI subsystem initialized
Oct 02 18:14:01 localhost kernel: ACPI: bus type USB registered
Oct 02 18:14:01 localhost kernel: usbcore: registered new interface driver usbfs
Oct 02 18:14:01 localhost kernel: usbcore: registered new interface driver hub
Oct 02 18:14:01 localhost kernel: usbcore: registered new device driver usb
Oct 02 18:14:01 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 02 18:14:01 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 02 18:14:01 localhost kernel: PTP clock support registered
Oct 02 18:14:01 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 02 18:14:01 localhost kernel: NetLabel: Initializing
Oct 02 18:14:01 localhost kernel: NetLabel:  domain hash size = 128
Oct 02 18:14:01 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 02 18:14:01 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 02 18:14:01 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 02 18:14:01 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 02 18:14:01 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 02 18:14:01 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 02 18:14:01 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 02 18:14:01 localhost kernel: vgaarb: loaded
Oct 02 18:14:01 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 02 18:14:01 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 02 18:14:01 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 02 18:14:01 localhost kernel: pnp: PnP ACPI init
Oct 02 18:14:01 localhost kernel: pnp 00:03: [dma 2]
Oct 02 18:14:01 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 02 18:14:01 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 02 18:14:01 localhost kernel: NET: Registered PF_INET protocol family
Oct 02 18:14:01 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 02 18:14:01 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 02 18:14:01 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 02 18:14:01 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 02 18:14:01 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 02 18:14:01 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 02 18:14:01 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 02 18:14:01 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 18:14:01 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 18:14:01 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 02 18:14:01 localhost kernel: NET: Registered PF_XDP protocol family
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 02 18:14:01 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 02 18:14:01 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 02 18:14:01 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 02 18:14:01 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 86722 usecs
Oct 02 18:14:01 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 02 18:14:01 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 02 18:14:01 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 02 18:14:01 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 02 18:14:01 localhost kernel: ACPI: bus type thunderbolt registered
Oct 02 18:14:01 localhost kernel: Initialise system trusted keyrings
Oct 02 18:14:01 localhost kernel: Key type blacklist registered
Oct 02 18:14:01 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 02 18:14:01 localhost kernel: zbud: loaded
Oct 02 18:14:01 localhost kernel: integrity: Platform Keyring initialized
Oct 02 18:14:01 localhost kernel: integrity: Machine keyring initialized
Oct 02 18:14:01 localhost kernel: Freeing initrd memory: 86104K
Oct 02 18:14:01 localhost kernel: NET: Registered PF_ALG protocol family
Oct 02 18:14:01 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 02 18:14:01 localhost kernel: Key type asymmetric registered
Oct 02 18:14:01 localhost kernel: Asymmetric key parser 'x509' registered
Oct 02 18:14:01 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 02 18:14:01 localhost kernel: io scheduler mq-deadline registered
Oct 02 18:14:01 localhost kernel: io scheduler kyber registered
Oct 02 18:14:01 localhost kernel: io scheduler bfq registered
Oct 02 18:14:01 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 02 18:14:01 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 02 18:14:01 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 02 18:14:01 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 02 18:14:01 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 02 18:14:01 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 02 18:14:01 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 02 18:14:01 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 02 18:14:01 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 02 18:14:01 localhost kernel: Non-volatile memory driver v1.3
Oct 02 18:14:01 localhost kernel: rdac: device handler registered
Oct 02 18:14:01 localhost kernel: hp_sw: device handler registered
Oct 02 18:14:01 localhost kernel: emc: device handler registered
Oct 02 18:14:01 localhost kernel: alua: device handler registered
Oct 02 18:14:01 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 02 18:14:01 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 02 18:14:01 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 02 18:14:01 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 02 18:14:01 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 02 18:14:01 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 02 18:14:01 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 02 18:14:01 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 02 18:14:01 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 02 18:14:01 localhost kernel: hub 1-0:1.0: USB hub found
Oct 02 18:14:01 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 02 18:14:01 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 02 18:14:01 localhost kernel: usbserial: USB Serial support registered for generic
Oct 02 18:14:01 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 02 18:14:01 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 02 18:14:01 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 02 18:14:01 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 02 18:14:01 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 02 18:14:01 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 02 18:14:01 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 02 18:14:01 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T18:14:00 UTC (1759428840)
Oct 02 18:14:01 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 02 18:14:01 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 02 18:14:01 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 02 18:14:01 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 02 18:14:01 localhost kernel: usbcore: registered new interface driver usbhid
Oct 02 18:14:01 localhost kernel: usbhid: USB HID core driver
Oct 02 18:14:01 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 02 18:14:01 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 02 18:14:01 localhost kernel: Initializing XFRM netlink socket
Oct 02 18:14:01 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 02 18:14:01 localhost kernel: Segment Routing with IPv6
Oct 02 18:14:01 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 02 18:14:01 localhost kernel: mpls_gso: MPLS GSO support
Oct 02 18:14:01 localhost kernel: IPI shorthand broadcast: enabled
Oct 02 18:14:01 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 02 18:14:01 localhost kernel: AES CTR mode by8 optimization enabled
Oct 02 18:14:01 localhost kernel: sched_clock: Marking stable (1220003700, 152517440)->(1447697630, -75176490)
Oct 02 18:14:01 localhost kernel: registered taskstats version 1
Oct 02 18:14:01 localhost kernel: Loading compiled-in X.509 certificates
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 02 18:14:01 localhost kernel: Demotion targets for Node 0: null
Oct 02 18:14:01 localhost kernel: page_owner is disabled
Oct 02 18:14:01 localhost kernel: Key type .fscrypt registered
Oct 02 18:14:01 localhost kernel: Key type fscrypt-provisioning registered
Oct 02 18:14:01 localhost kernel: Key type big_key registered
Oct 02 18:14:01 localhost kernel: Key type encrypted registered
Oct 02 18:14:01 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 02 18:14:01 localhost kernel: Loading compiled-in module X.509 certificates
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 18:14:01 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 02 18:14:01 localhost kernel: ima: No architecture policies found
Oct 02 18:14:01 localhost kernel: evm: Initialising EVM extended attributes:
Oct 02 18:14:01 localhost kernel: evm: security.selinux
Oct 02 18:14:01 localhost kernel: evm: security.SMACK64 (disabled)
Oct 02 18:14:01 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 02 18:14:01 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 02 18:14:01 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 02 18:14:01 localhost kernel: evm: security.apparmor (disabled)
Oct 02 18:14:01 localhost kernel: evm: security.ima
Oct 02 18:14:01 localhost kernel: evm: security.capability
Oct 02 18:14:01 localhost kernel: evm: HMAC attrs: 0x1
Oct 02 18:14:01 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 02 18:14:01 localhost kernel: Running certificate verification RSA selftest
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 02 18:14:01 localhost kernel: Running certificate verification ECDSA selftest
Oct 02 18:14:01 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 02 18:14:01 localhost kernel: clk: Disabling unused clocks
Oct 02 18:14:01 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 02 18:14:01 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 02 18:14:01 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 02 18:14:01 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 02 18:14:01 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 02 18:14:01 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 02 18:14:01 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 02 18:14:01 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 02 18:14:01 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 02 18:14:01 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 02 18:14:01 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 02 18:14:01 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 02 18:14:01 localhost kernel: Run /init as init process
Oct 02 18:14:01 localhost kernel:   with arguments:
Oct 02 18:14:01 localhost kernel:     /init
Oct 02 18:14:01 localhost kernel:   with environment:
Oct 02 18:14:01 localhost kernel:     HOME=/
Oct 02 18:14:01 localhost kernel:     TERM=linux
Oct 02 18:14:01 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 02 18:14:01 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 18:14:01 localhost systemd[1]: Detected virtualization kvm.
Oct 02 18:14:01 localhost systemd[1]: Detected architecture x86-64.
Oct 02 18:14:01 localhost systemd[1]: Running in initrd.
Oct 02 18:14:01 localhost systemd[1]: No hostname configured, using default hostname.
Oct 02 18:14:01 localhost systemd[1]: Hostname set to <localhost>.
Oct 02 18:14:01 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 02 18:14:01 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 02 18:14:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 18:14:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 18:14:01 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 02 18:14:01 localhost systemd[1]: Reached target Local File Systems.
Oct 02 18:14:01 localhost systemd[1]: Reached target Path Units.
Oct 02 18:14:01 localhost systemd[1]: Reached target Slice Units.
Oct 02 18:14:01 localhost systemd[1]: Reached target Swaps.
Oct 02 18:14:01 localhost systemd[1]: Reached target Timer Units.
Oct 02 18:14:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 18:14:01 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 02 18:14:01 localhost systemd[1]: Listening on Journal Socket.
Oct 02 18:14:01 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 18:14:01 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 18:14:01 localhost systemd[1]: Reached target Socket Units.
Oct 02 18:14:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 18:14:01 localhost systemd[1]: Starting Journal Service...
Oct 02 18:14:01 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 18:14:01 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 18:14:01 localhost systemd[1]: Starting Create System Users...
Oct 02 18:14:01 localhost systemd[1]: Starting Setup Virtual Console...
Oct 02 18:14:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 18:14:01 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 18:14:01 localhost systemd[1]: Finished Create System Users.
Oct 02 18:14:01 localhost systemd-journald[309]: Journal started
Oct 02 18:14:01 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/f951c71cb20747a89e733e13df1d111a) is 8.0M, max 153.5M, 145.5M free.
Oct 02 18:14:01 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct 02 18:14:01 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct 02 18:14:01 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 02 18:14:01 localhost systemd[1]: Started Journal Service.
Oct 02 18:14:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 18:14:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 18:14:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 18:14:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 18:14:01 localhost systemd[1]: Finished Setup Virtual Console.
Oct 02 18:14:01 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 02 18:14:01 localhost systemd[1]: Starting dracut cmdline hook...
Oct 02 18:14:01 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 02 18:14:01 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 18:14:01 localhost systemd[1]: Finished dracut cmdline hook.
Oct 02 18:14:01 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 02 18:14:01 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 02 18:14:01 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 02 18:14:01 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 02 18:14:01 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 02 18:14:01 localhost kernel: RPC: Registered udp transport module.
Oct 02 18:14:01 localhost kernel: RPC: Registered tcp transport module.
Oct 02 18:14:01 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 02 18:14:01 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 02 18:14:01 localhost rpc.statd[443]: Version 2.5.4 starting
Oct 02 18:14:01 localhost rpc.statd[443]: Initializing NSM state
Oct 02 18:14:01 localhost rpc.idmapd[448]: Setting log level to 0
Oct 02 18:14:01 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 02 18:14:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 18:14:02 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 18:14:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 18:14:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 02 18:14:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 02 18:14:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 18:14:02 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 02 18:14:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 18:14:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 18:14:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 18:14:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 18:14:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 18:14:02 localhost systemd[1]: Reached target Network.
Oct 02 18:14:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 18:14:02 localhost systemd[1]: Starting dracut initqueue hook...
Oct 02 18:14:02 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 02 18:14:02 localhost kernel: libata version 3.00 loaded.
Oct 02 18:14:02 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 02 18:14:02 localhost kernel: scsi host0: ata_piix
Oct 02 18:14:02 localhost kernel: scsi host1: ata_piix
Oct 02 18:14:02 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 02 18:14:02 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 02 18:14:02 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 02 18:14:02 localhost kernel:  vda: vda1
Oct 02 18:14:02 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 02 18:14:02 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 02 18:14:02 localhost systemd[1]: Reached target System Initialization.
Oct 02 18:14:02 localhost systemd[1]: Reached target Basic System.
Oct 02 18:14:02 localhost kernel: ata1: found unknown device (class 0)
Oct 02 18:14:02 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 02 18:14:02 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 02 18:14:02 localhost systemd-udevd[492]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:14:02 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 02 18:14:02 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 18:14:02 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 02 18:14:02 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 02 18:14:02 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 02 18:14:02 localhost systemd[1]: Reached target Initrd Root Device.
Oct 02 18:14:02 localhost systemd[1]: Finished dracut initqueue hook.
Oct 02 18:14:02 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 18:14:02 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 02 18:14:02 localhost systemd[1]: Reached target Remote File Systems.
Oct 02 18:14:02 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 02 18:14:02 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 02 18:14:02 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 02 18:14:02 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Oct 02 18:14:02 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 18:14:02 localhost systemd[1]: Mounting /sysroot...
Oct 02 18:14:03 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 02 18:14:03 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 02 18:14:03 localhost kernel: XFS (vda1): Ending clean mount
Oct 02 18:14:03 localhost systemd[1]: Mounted /sysroot.
Oct 02 18:14:03 localhost systemd[1]: Reached target Initrd Root File System.
Oct 02 18:14:03 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 02 18:14:03 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 02 18:14:03 localhost systemd[1]: Reached target Initrd File Systems.
Oct 02 18:14:03 localhost systemd[1]: Reached target Initrd Default Target.
Oct 02 18:14:03 localhost systemd[1]: Starting dracut mount hook...
Oct 02 18:14:03 localhost systemd[1]: Finished dracut mount hook.
Oct 02 18:14:03 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 02 18:14:03 localhost rpc.idmapd[448]: exiting on signal 15
Oct 02 18:14:03 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 02 18:14:03 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 02 18:14:03 localhost systemd[1]: Stopped target Network.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Timer Units.
Oct 02 18:14:03 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 02 18:14:03 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Basic System.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Path Units.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Remote File Systems.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Slice Units.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Socket Units.
Oct 02 18:14:03 localhost systemd[1]: Stopped target System Initialization.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Local File Systems.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Swaps.
Oct 02 18:14:03 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut mount hook.
Oct 02 18:14:03 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 02 18:14:03 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 02 18:14:03 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 02 18:14:03 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 02 18:14:03 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 02 18:14:03 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 02 18:14:03 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 02 18:14:03 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 02 18:14:03 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 02 18:14:03 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 02 18:14:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 02 18:14:03 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 02 18:14:03 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Closed udev Control Socket.
Oct 02 18:14:03 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Closed udev Kernel Socket.
Oct 02 18:14:03 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 02 18:14:03 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 02 18:14:03 localhost systemd[1]: Starting Cleanup udev Database...
Oct 02 18:14:03 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 02 18:14:03 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 02 18:14:03 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Stopped Create System Users.
Oct 02 18:14:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 02 18:14:03 localhost systemd[1]: Finished Cleanup udev Database.
Oct 02 18:14:03 localhost systemd[1]: Reached target Switch Root.
Oct 02 18:14:03 localhost systemd[1]: Starting Switch Root...
Oct 02 18:14:03 localhost systemd[1]: Switching root.
Oct 02 18:14:03 localhost systemd-journald[309]: Journal stopped
Oct 02 18:14:04 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Oct 02 18:14:04 localhost kernel: audit: type=1404 audit(1759428843.919:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability open_perms=1
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:14:04 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:14:04 localhost kernel: audit: type=1403 audit(1759428844.080:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 02 18:14:04 localhost systemd[1]: Successfully loaded SELinux policy in 165.142ms.
Oct 02 18:14:04 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.439ms.
Oct 02 18:14:04 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 18:14:04 localhost systemd[1]: Detected virtualization kvm.
Oct 02 18:14:04 localhost systemd[1]: Detected architecture x86-64.
Oct 02 18:14:04 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:14:04 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Stopped Switch Root.
Oct 02 18:14:04 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 02 18:14:04 localhost systemd[1]: Created slice Slice /system/getty.
Oct 02 18:14:04 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 02 18:14:04 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 02 18:14:04 localhost systemd[1]: Created slice User and Session Slice.
Oct 02 18:14:04 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 18:14:04 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 02 18:14:04 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 02 18:14:04 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 18:14:04 localhost systemd[1]: Stopped target Switch Root.
Oct 02 18:14:04 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 02 18:14:04 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 02 18:14:04 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 02 18:14:04 localhost systemd[1]: Reached target Path Units.
Oct 02 18:14:04 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 02 18:14:04 localhost systemd[1]: Reached target Slice Units.
Oct 02 18:14:04 localhost systemd[1]: Reached target Swaps.
Oct 02 18:14:04 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 02 18:14:04 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 02 18:14:04 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 02 18:14:04 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 02 18:14:04 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 02 18:14:04 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 18:14:04 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 18:14:04 localhost systemd[1]: Mounting Huge Pages File System...
Oct 02 18:14:04 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 02 18:14:04 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 02 18:14:04 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 02 18:14:04 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 18:14:04 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 18:14:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 18:14:04 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 02 18:14:04 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 02 18:14:04 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 02 18:14:04 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 02 18:14:04 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 02 18:14:04 localhost systemd[1]: Stopped Journal Service.
Oct 02 18:14:04 localhost systemd[1]: Starting Journal Service...
Oct 02 18:14:04 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 18:14:04 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 02 18:14:04 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 18:14:04 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 02 18:14:04 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 02 18:14:04 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 18:14:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 18:14:04 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 02 18:14:04 localhost systemd-journald[681]: Journal started
Oct 02 18:14:04 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 18:14:04 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 02 18:14:04 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Started Journal Service.
Oct 02 18:14:04 localhost systemd[1]: Mounted Huge Pages File System.
Oct 02 18:14:04 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 02 18:14:04 localhost kernel: ACPI: bus type drm_connector registered
Oct 02 18:14:04 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 02 18:14:04 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 02 18:14:04 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 18:14:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 18:14:04 localhost kernel: fuse: init (API version 7.37)
Oct 02 18:14:04 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 02 18:14:04 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 02 18:14:04 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 02 18:14:04 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 02 18:14:04 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 02 18:14:04 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 02 18:14:04 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 02 18:14:04 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 18:14:04 localhost systemd[1]: Mounting FUSE Control File System...
Oct 02 18:14:04 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 18:14:04 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 02 18:14:04 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 02 18:14:04 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 02 18:14:04 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 02 18:14:04 localhost systemd[1]: Starting Create System Users...
Oct 02 18:14:04 localhost systemd[1]: Mounted FUSE Control File System.
Oct 02 18:14:04 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 18:14:04 localhost systemd-journald[681]: Received client request to flush runtime journal.
Oct 02 18:14:04 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 02 18:14:04 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 18:14:04 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 02 18:14:04 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 18:14:04 localhost systemd[1]: Finished Create System Users.
Oct 02 18:14:04 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 18:14:04 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 18:14:04 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 02 18:14:04 localhost systemd[1]: Reached target Local File Systems.
Oct 02 18:14:04 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 02 18:14:04 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 02 18:14:04 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 02 18:14:04 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 02 18:14:04 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 02 18:14:04 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 02 18:14:04 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 18:14:04 localhost bootctl[701]: Couldn't find EFI system partition, skipping.
Oct 02 18:14:04 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 02 18:14:05 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 18:14:05 localhost systemd[1]: Starting Security Auditing Service...
Oct 02 18:14:05 localhost systemd[1]: Starting RPC Bind...
Oct 02 18:14:05 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 02 18:14:05 localhost auditd[707]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 02 18:14:05 localhost auditd[707]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 02 18:14:05 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 02 18:14:05 localhost systemd[1]: Started RPC Bind.
Oct 02 18:14:05 localhost augenrules[712]: /sbin/augenrules: No change
Oct 02 18:14:05 localhost augenrules[727]: No rules
Oct 02 18:14:05 localhost augenrules[727]: enabled 1
Oct 02 18:14:05 localhost augenrules[727]: failure 1
Oct 02 18:14:05 localhost augenrules[727]: pid 707
Oct 02 18:14:05 localhost augenrules[727]: rate_limit 0
Oct 02 18:14:05 localhost augenrules[727]: backlog_limit 8192
Oct 02 18:14:05 localhost augenrules[727]: lost 0
Oct 02 18:14:05 localhost augenrules[727]: backlog 0
Oct 02 18:14:05 localhost augenrules[727]: backlog_wait_time 60000
Oct 02 18:14:05 localhost augenrules[727]: backlog_wait_time_actual 0
Oct 02 18:14:05 localhost augenrules[727]: enabled 1
Oct 02 18:14:05 localhost augenrules[727]: failure 1
Oct 02 18:14:05 localhost augenrules[727]: pid 707
Oct 02 18:14:05 localhost augenrules[727]: rate_limit 0
Oct 02 18:14:05 localhost augenrules[727]: backlog_limit 8192
Oct 02 18:14:05 localhost augenrules[727]: lost 0
Oct 02 18:14:05 localhost augenrules[727]: backlog 0
Oct 02 18:14:05 localhost augenrules[727]: backlog_wait_time 60000
Oct 02 18:14:05 localhost augenrules[727]: backlog_wait_time_actual 0
Oct 02 18:14:05 localhost augenrules[727]: enabled 1
Oct 02 18:14:05 localhost augenrules[727]: failure 1
Oct 02 18:14:05 localhost augenrules[727]: pid 707
Oct 02 18:14:05 localhost augenrules[727]: rate_limit 0
Oct 02 18:14:05 localhost augenrules[727]: backlog_limit 8192
Oct 02 18:14:05 localhost augenrules[727]: lost 0
Oct 02 18:14:05 localhost augenrules[727]: backlog 0
Oct 02 18:14:05 localhost augenrules[727]: backlog_wait_time 60000
Oct 02 18:14:05 localhost augenrules[727]: backlog_wait_time_actual 0
Oct 02 18:14:05 localhost systemd[1]: Started Security Auditing Service.
Oct 02 18:14:05 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 02 18:14:05 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 02 18:14:05 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 02 18:14:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 18:14:05 localhost systemd-udevd[735]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 18:14:05 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 02 18:14:05 localhost systemd[1]: Starting Update is Completed...
Oct 02 18:14:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 18:14:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 18:14:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 18:14:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 18:14:05 localhost systemd[1]: Finished Update is Completed.
Oct 02 18:14:05 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 02 18:14:05 localhost systemd[1]: Reached target System Initialization.
Oct 02 18:14:05 localhost systemd[1]: Started dnf makecache --timer.
Oct 02 18:14:05 localhost systemd[1]: Started Daily rotation of log files.
Oct 02 18:14:05 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 02 18:14:05 localhost systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:14:05 localhost systemd[1]: Reached target Timer Units.
Oct 02 18:14:05 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 18:14:05 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 02 18:14:05 localhost systemd[1]: Reached target Socket Units.
Oct 02 18:14:05 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 02 18:14:05 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 18:14:05 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 02 18:14:05 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 02 18:14:05 localhost systemd[1]: Reached target Basic System.
Oct 02 18:14:05 localhost dbus-broker-lau[768]: Ready
Oct 02 18:14:05 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 02 18:14:05 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 02 18:14:05 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 02 18:14:05 localhost systemd[1]: Starting NTP client/server...
Oct 02 18:14:05 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 02 18:14:05 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 02 18:14:05 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 02 18:14:05 localhost systemd[1]: Started irqbalance daemon.
Oct 02 18:14:05 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 02 18:14:05 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:14:05 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:14:05 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 18:14:05 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 02 18:14:05 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 02 18:14:05 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 02 18:14:05 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 02 18:14:05 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 02 18:14:05 localhost kernel: Console: switching to colour dummy device 80x25
Oct 02 18:14:05 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 02 18:14:05 localhost kernel: [drm] features: -context_init
Oct 02 18:14:05 localhost kernel: [drm] number of scanouts: 1
Oct 02 18:14:05 localhost kernel: [drm] number of cap sets: 0
Oct 02 18:14:05 localhost systemd[1]: Starting User Login Management...
Oct 02 18:14:05 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 02 18:14:05 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 02 18:14:05 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 02 18:14:05 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 02 18:14:05 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 02 18:14:05 localhost chronyd[810]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 18:14:05 localhost chronyd[810]: Loaded 0 symmetric keys
Oct 02 18:14:05 localhost chronyd[810]: Using right/UTC timezone to obtain leap second data
Oct 02 18:14:05 localhost chronyd[810]: Loaded seccomp filter (level 2)
Oct 02 18:14:05 localhost systemd[1]: Started NTP client/server.
Oct 02 18:14:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 02 18:14:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 02 18:14:05 localhost systemd-logind[798]: New seat seat0.
Oct 02 18:14:05 localhost systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 18:14:05 localhost systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 18:14:05 localhost systemd[1]: Started User Login Management.
Oct 02 18:14:05 localhost kernel: kvm_amd: TSC scaling supported
Oct 02 18:14:05 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 02 18:14:05 localhost kernel: kvm_amd: Nested Paging enabled
Oct 02 18:14:05 localhost kernel: kvm_amd: LBR virtualization supported
Oct 02 18:14:05 localhost iptables.init[788]: iptables: Applying firewall rules: [  OK  ]
Oct 02 18:14:05 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 02 18:14:06 localhost cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 18:14:06 +0000. Up 7.04 seconds.
Oct 02 18:14:06 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 02 18:14:06 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 02 18:14:06 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpmakpea3d.mount: Deactivated successfully.
Oct 02 18:14:06 localhost systemd[1]: Starting Hostname Service...
Oct 02 18:14:06 localhost systemd[1]: Started Hostname Service.
Oct 02 18:14:06 np0005467075.novalocal systemd-hostnamed[857]: Hostname set to <np0005467075.novalocal> (static)
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Reached target Preparation for Network.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Starting Network Manager...
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1132] NetworkManager (version 1.54.1-1.el9) is starting... (boot:cafe0c2a-2d4b-4517-8a8b-b22a7ae0a086)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1137] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1289] manager[0x55981569b080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1356] hostname: hostname: using hostnamed
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1357] hostname: static hostname changed from (none) to "np0005467075.novalocal"
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1362] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1478] manager[0x55981569b080]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1478] manager[0x55981569b080]: rfkill: WWAN hardware radio set enabled
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1576] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1577] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1577] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1578] manager: Networking is enabled by state file
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1580] settings: Loaded settings plugin: keyfile (internal)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1623] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1652] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1692] dhcp: init: Using DHCP client 'internal'
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1695] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1710] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1728] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1737] device (lo): Activation: starting connection 'lo' (aeebfd8a-b15e-4738-a7c9-24998c83f095)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1763] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1767] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1802] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Started Network Manager.
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1819] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1822] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1824] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1826] device (eth0): carrier: link connected
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1829] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1836] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Reached target Network.
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1856] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1861] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1862] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1866] manager: NetworkManager state is now CONNECTING
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1867] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1874] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1877] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1909] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1917] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1937] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1996] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.1998] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2004] device (lo): Activation: successful, device activated.
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2020] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2022] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2027] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2030] device (eth0): Activation: successful, device activated.
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2036] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 18:14:07 np0005467075.novalocal NetworkManager[861]: <info>  [1759428847.2038] manager: startup complete
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Reached target NFS client services.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: Reached target Remote File Systems.
Oct 02 18:14:07 np0005467075.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 18:14:07 +0000. Up 8.25 seconds.
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.147         | 255.255.255.0 | global | fa:16:3e:74:a1:aa |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe74:a1aa/64 |       .       |  link  | fa:16:3e:74:a1:aa |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 02 18:14:07 np0005467075.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 18:14:08 np0005467075.novalocal useradd[991]: new group: name=cloud-user, GID=1001
Oct 02 18:14:08 np0005467075.novalocal useradd[991]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 02 18:14:08 np0005467075.novalocal useradd[991]: add 'cloud-user' to group 'adm'
Oct 02 18:14:08 np0005467075.novalocal useradd[991]: add 'cloud-user' to group 'systemd-journal'
Oct 02 18:14:08 np0005467075.novalocal useradd[991]: add 'cloud-user' to shadow group 'adm'
Oct 02 18:14:08 np0005467075.novalocal useradd[991]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Generating public/private rsa key pair.
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: SHA256:nGAJdbLvXFMrHHO9Vw74o2NcOgznZsOhZSrvWUZ2FAs root@np0005467075.novalocal
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: +---[RSA 3072]----+
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |    ..o .   E .  |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |     . =     + o |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |      =   o + = .|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |     . + o = + +.|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |        S * O * o|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |       o . ^ * o |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |        + o ^    |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |         o B +   |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |         .+      |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: SHA256:UoXmPNu7mENw/kY5gT9/E996uQUf2ocfW7qG6M/cqB0 root@np0005467075.novalocal
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: +---[ECDSA 256]---+
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |         ..      |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |        o.       |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |       +..       |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |      ..* .      |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |      .+S= o  .. |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |       .+ B   ++.|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |       . o *Eo.=B|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |        .o=+o+++X|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |        o+++*o=Bo|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: The key fingerprint is:
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: SHA256:fbT7rQguaaGD8k9BczEt+nQoNEEMHaU7dYa6N/4Jg/o root@np0005467075.novalocal
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: The key's randomart image is:
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: +--[ED25519 256]--+
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |    .=+++.       |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |      =..+.      |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |     .+o+oo .    |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |     .o*o+.. .   |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |      =+S.. o    |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |       =o  . .   |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |     .+.=o. .    |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |  . .ooo+= o o . |
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: |   oooEo.o+ . o..|
Oct 02 18:14:08 np0005467075.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 02 18:14:09 np0005467075.novalocal sm-notify[1006]: Version 2.5.4 starting
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Reached target Network is Online.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting System Logging Service...
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting Permit User Sessions...
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 02 18:14:09 np0005467075.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Oct 02 18:14:09 np0005467075.novalocal sshd[1008]: Server listening on :: port 22.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Finished Permit User Sessions.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Started Command Scheduler.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Started Getty on tty1.
Oct 02 18:14:09 np0005467075.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Oct 02 18:14:09 np0005467075.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 02 18:14:09 np0005467075.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 27% if used.)
Oct 02 18:14:09 np0005467075.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Oct 02 18:14:09 np0005467075.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Oct 02 18:14:09 np0005467075.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Reached target Login Prompts.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Started System Logging Service.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Reached target Multi-User System.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 02 18:14:09 np0005467075.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1019]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 18:14:09 +0000. Up 10.06 seconds.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 02 18:14:09 np0005467075.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1023]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 18:14:09 +0000. Up 10.48 seconds.
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1025]: #############################################################
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1026]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1028]: 256 SHA256:UoXmPNu7mENw/kY5gT9/E996uQUf2ocfW7qG6M/cqB0 root@np0005467075.novalocal (ECDSA)
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1030]: 256 SHA256:fbT7rQguaaGD8k9BczEt+nQoNEEMHaU7dYa6N/4Jg/o root@np0005467075.novalocal (ED25519)
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1032]: 3072 SHA256:nGAJdbLvXFMrHHO9Vw74o2NcOgznZsOhZSrvWUZ2FAs root@np0005467075.novalocal (RSA)
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1033]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 02 18:14:09 np0005467075.novalocal cloud-init[1034]: #############################################################
Oct 02 18:14:10 np0005467075.novalocal cloud-init[1023]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 18:14:10 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.72 seconds
Oct 02 18:14:10 np0005467075.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 02 18:14:10 np0005467075.novalocal systemd[1]: Reached target Cloud-init target.
Oct 02 18:14:10 np0005467075.novalocal systemd[1]: Startup finished in 1.653s (kernel) + 2.931s (initrd) + 6.215s (userspace) = 10.799s.
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1038]: Connection closed by 38.102.83.114 port 47644 [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1040]: Unable to negotiate with 38.102.83.114 port 47656: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1042]: Connection reset by 38.102.83.114 port 47670 [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1044]: Unable to negotiate with 38.102.83.114 port 47680: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1046]: Unable to negotiate with 38.102.83.114 port 47682: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1050]: Connection reset by 38.102.83.114 port 47698 [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1052]: Unable to negotiate with 38.102.83.114 port 47714: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1054]: Unable to negotiate with 38.102.83.114 port 47726: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 02 18:14:11 np0005467075.novalocal sshd-session[1048]: Connection closed by 38.102.83.114 port 47684 [preauth]
Oct 02 18:14:12 np0005467075.novalocal chronyd[810]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Oct 02 18:14:12 np0005467075.novalocal chronyd[810]: System clock TAI offset set to 37 seconds
Oct 02 18:14:13 np0005467075.novalocal chronyd[810]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 35 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 35 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 25 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 33 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 33 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 26 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 34 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 34 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 02 18:14:16 np0005467075.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Oct 02 18:14:17 np0005467075.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:14:37 np0005467075.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:14:48 np0005467075.novalocal sshd-session[1058]: Connection closed by 205.210.31.99 port 49167
Oct 02 18:17:03 np0005467075.novalocal sshd-session[1059]: Accepted publickey for zuul from 38.102.83.114 port 60914 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 02 18:17:03 np0005467075.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 02 18:17:03 np0005467075.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 02 18:17:03 np0005467075.novalocal systemd-logind[798]: New session 1 of user zuul.
Oct 02 18:17:03 np0005467075.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 02 18:17:03 np0005467075.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Queued start job for default target Main User Target.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Created slice User Application Slice.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Reached target Paths.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Reached target Timers.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Starting D-Bus User Message Bus Socket...
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Starting Create User's Volatile Files and Directories...
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Listening on D-Bus User Message Bus Socket.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Finished Create User's Volatile Files and Directories.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Reached target Sockets.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Reached target Basic System.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Reached target Main User Target.
Oct 02 18:17:03 np0005467075.novalocal systemd[1063]: Startup finished in 152ms.
Oct 02 18:17:03 np0005467075.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 02 18:17:03 np0005467075.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 02 18:17:03 np0005467075.novalocal sshd-session[1059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:17:04 np0005467075.novalocal python3[1145]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:17:07 np0005467075.novalocal python3[1173]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:17:14 np0005467075.novalocal python3[1231]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:17:15 np0005467075.novalocal python3[1271]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 02 18:17:17 np0005467075.novalocal python3[1297]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP2fsz3tOmSLqtSx68uFWLaiEXe8V5btgKusU5ixiPvqitL0zi0q7ecQHBJenZuItuVvgFJBG/4s8P2oBDSck3v68/5Ny5C4U76xYMFnBx17f0p/rc/UjNoCvVl/cnWoMJGAx30NC6u6LsWkmLg8MN0GVao3PI2H0mAaow890PyCh7JZ0PrBvXC+lZ1BMOLlOruRMrBIcSfPyAPGeoi8W0BVYkbQUpvVMmyudV+gpdI21AGKrSitNCPidY0gmM5UKzb2fLZFzJUG72CCiCabW9eeOTA7VCpQrteeCeQT03COjIahn/5a0Xwg9D6quX4TaAOlhIL/Esd2qbRRB7eJOTo+aqlsVR3ET7onRxv9cKv65DCjfTKejBY2N7BStjjfu3VkFd5RaiusBS38KgrX11phYyOa0S4TsHmZkPJE+4vDCIiFwPhuG8FtMgxQ1ggN/sygew+8K53AQNbW6J3LhpUvUt7ezX8CpgBE0GgJmX8kuSUcY6Nt+LmlPvIA8/IMM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:17 np0005467075.novalocal python3[1321]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:18 np0005467075.novalocal python3[1420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:18 np0005467075.novalocal python3[1491]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759429038.1914353-207-61990663417986/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=214ed0f7db7241a78e79d80cd424c319_id_rsa follow=False checksum=599b70f18571ba831d7c926fcef605fd640ad84c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:19 np0005467075.novalocal python3[1614]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:19 np0005467075.novalocal python3[1685]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759429039.173061-240-126436712171565/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=214ed0f7db7241a78e79d80cd424c319_id_rsa.pub follow=False checksum=9cab1fd41fce9bab6df381728f399d854ce0d2e9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:21 np0005467075.novalocal python3[1733]: ansible-ping Invoked with data=pong
Oct 02 18:17:22 np0005467075.novalocal python3[1757]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:17:24 np0005467075.novalocal python3[1815]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 02 18:17:26 np0005467075.novalocal python3[1847]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:26 np0005467075.novalocal python3[1871]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:26 np0005467075.novalocal python3[1895]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:26 np0005467075.novalocal python3[1919]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:27 np0005467075.novalocal python3[1943]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:27 np0005467075.novalocal python3[1967]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:29 np0005467075.novalocal sudo[1991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeoqailzdqquwcfudfssbvhbvusasqtf ; /usr/bin/python3'
Oct 02 18:17:29 np0005467075.novalocal sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:29 np0005467075.novalocal python3[1993]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:29 np0005467075.novalocal sudo[1991]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:29 np0005467075.novalocal sudo[2069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpypiiybmywwxvlxccwtdefrtgqtkfrg ; /usr/bin/python3'
Oct 02 18:17:29 np0005467075.novalocal sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:29 np0005467075.novalocal python3[2071]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:29 np0005467075.novalocal sudo[2069]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:30 np0005467075.novalocal sudo[2142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efpmvndditjogkrrewmluiuzfzwepxwp ; /usr/bin/python3'
Oct 02 18:17:30 np0005467075.novalocal sudo[2142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:30 np0005467075.novalocal python3[2144]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759429049.4123216-21-81842313340076/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:30 np0005467075.novalocal sudo[2142]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:31 np0005467075.novalocal python3[2192]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:31 np0005467075.novalocal python3[2216]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:31 np0005467075.novalocal python3[2240]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:31 np0005467075.novalocal python3[2264]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:32 np0005467075.novalocal python3[2288]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:32 np0005467075.novalocal python3[2312]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:32 np0005467075.novalocal python3[2336]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:32 np0005467075.novalocal python3[2360]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:33 np0005467075.novalocal python3[2384]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:33 np0005467075.novalocal python3[2408]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:33 np0005467075.novalocal python3[2432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:34 np0005467075.novalocal python3[2456]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:34 np0005467075.novalocal python3[2480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:34 np0005467075.novalocal python3[2504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:34 np0005467075.novalocal python3[2528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:35 np0005467075.novalocal python3[2552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:35 np0005467075.novalocal python3[2576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:35 np0005467075.novalocal python3[2600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:36 np0005467075.novalocal python3[2624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:36 np0005467075.novalocal python3[2648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:36 np0005467075.novalocal python3[2672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:36 np0005467075.novalocal python3[2696]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:37 np0005467075.novalocal python3[2720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:37 np0005467075.novalocal python3[2744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:37 np0005467075.novalocal python3[2768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:38 np0005467075.novalocal python3[2792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:17:40 np0005467075.novalocal sudo[2816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmsfnxskmekrozowthbnmxpltfnbupie ; /usr/bin/python3'
Oct 02 18:17:40 np0005467075.novalocal sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:41 np0005467075.novalocal python3[2818]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 18:17:41 np0005467075.novalocal systemd[1]: Starting Time & Date Service...
Oct 02 18:17:41 np0005467075.novalocal systemd[1]: Started Time & Date Service.
Oct 02 18:17:41 np0005467075.novalocal systemd-timedated[2820]: Changed time zone to 'UTC' (UTC).
Oct 02 18:17:41 np0005467075.novalocal sudo[2816]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:41 np0005467075.novalocal sudo[2847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnokyhspvoawxexsahgyqcudatavobog ; /usr/bin/python3'
Oct 02 18:17:41 np0005467075.novalocal sudo[2847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:41 np0005467075.novalocal python3[2849]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:41 np0005467075.novalocal sudo[2847]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:42 np0005467075.novalocal python3[2925]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:42 np0005467075.novalocal python3[2996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759429061.8834236-153-244493898474974/source _original_basename=tmpscrm867s follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:43 np0005467075.novalocal python3[3096]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:43 np0005467075.novalocal python3[3167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759429062.858138-183-278529046953742/source _original_basename=tmp0uwnklcq follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:44 np0005467075.novalocal sudo[3267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghagtbpzthcbzkbhrhuijodibbyuamf ; /usr/bin/python3'
Oct 02 18:17:44 np0005467075.novalocal sudo[3267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:44 np0005467075.novalocal python3[3269]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:44 np0005467075.novalocal sudo[3267]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:44 np0005467075.novalocal sudo[3340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuvwctvqelpboeysgllfszqjqrpijod ; /usr/bin/python3'
Oct 02 18:17:44 np0005467075.novalocal sudo[3340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:44 np0005467075.novalocal python3[3342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759429063.965587-231-270848684513472/source _original_basename=tmpbwg9ve0u follow=False checksum=420e3a2f9d15a75f0a2d48d73e892351a51b8b4f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:44 np0005467075.novalocal sudo[3340]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:45 np0005467075.novalocal python3[3391]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:17:45 np0005467075.novalocal python3[3419]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:17:45 np0005467075.novalocal sudo[3497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhaavngvpfidmbnjveqckylyygnckwh ; /usr/bin/python3'
Oct 02 18:17:45 np0005467075.novalocal sudo[3497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:46 np0005467075.novalocal python3[3499]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:17:46 np0005467075.novalocal sudo[3497]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:46 np0005467075.novalocal sshd-session[3367]: Connection closed by 60.13.6.195 port 29600
Oct 02 18:17:46 np0005467075.novalocal sudo[3570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypalkfwesyfljzcurjovvsftgwgfsefq ; /usr/bin/python3'
Oct 02 18:17:46 np0005467075.novalocal sudo[3570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:46 np0005467075.novalocal python3[3572]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759429065.7315738-273-143832795134108/source _original_basename=tmpndnyw0sd follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:17:46 np0005467075.novalocal sudo[3570]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:46 np0005467075.novalocal sudo[3621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zffptbxupjkqxcjrgxydmtwmhcauozru ; /usr/bin/python3'
Oct 02 18:17:46 np0005467075.novalocal sudo[3621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:17:47 np0005467075.novalocal python3[3623]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-1754-bfc9-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:17:47 np0005467075.novalocal sudo[3621]: pam_unix(sudo:session): session closed for user root
Oct 02 18:17:47 np0005467075.novalocal python3[3651]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-1754-bfc9-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 02 18:17:48 np0005467075.novalocal python3[3679]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:18:01 np0005467075.novalocal sshd-session[3400]: Connection closed by 118.212.122.137 port 57957 [preauth]
Oct 02 18:18:06 np0005467075.novalocal sudo[3703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkdidpnpjkojdccivfyuenlzabyvjvte ; /usr/bin/python3'
Oct 02 18:18:06 np0005467075.novalocal sudo[3703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:18:06 np0005467075.novalocal python3[3705]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:18:06 np0005467075.novalocal sudo[3703]: pam_unix(sudo:session): session closed for user root
Oct 02 18:18:11 np0005467075.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 02 18:18:40 np0005467075.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 02 18:18:40 np0005467075.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8133] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 18:18:40 np0005467075.novalocal systemd-udevd[3709]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8303] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8324] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8327] device (eth1): carrier: link connected
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8328] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8332] policy: auto-activating connection 'Wired connection 1' (d4b15060-b769-328b-b6bc-44454af900b8)
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8335] device (eth1): Activation: starting connection 'Wired connection 1' (d4b15060-b769-328b-b6bc-44454af900b8)
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8336] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8338] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8341] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:18:40 np0005467075.novalocal NetworkManager[861]: <info>  [1759429120.8345] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:18:41 np0005467075.novalocal python3[3736]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-0fcc-2c06-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:18:48 np0005467075.novalocal sudo[3814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shfaeioaiwhgnasuliavipkqlbpzqcai ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:18:48 np0005467075.novalocal sudo[3814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:18:48 np0005467075.novalocal python3[3816]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:18:48 np0005467075.novalocal sudo[3814]: pam_unix(sudo:session): session closed for user root
Oct 02 18:18:48 np0005467075.novalocal sudo[3887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwivkvvqdhdxlsogevrjbayixcurslkt ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:18:48 np0005467075.novalocal sudo[3887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:18:49 np0005467075.novalocal python3[3889]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759429128.2831383-102-279587184388081/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=855b2931cae457c8e44801f1c22da2674156d107 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:18:49 np0005467075.novalocal sudo[3887]: pam_unix(sudo:session): session closed for user root
Oct 02 18:18:49 np0005467075.novalocal sudo[3937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wveyfkgpepavfjjgnuyvnwhbmafgzpym ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:18:49 np0005467075.novalocal sudo[3937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:18:49 np0005467075.novalocal python3[3939]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Stopping Network Manager...
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9313] caught SIGTERM, shutting down normally.
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9327] dhcp4 (eth0): canceled DHCP transaction
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9328] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9328] dhcp4 (eth0): state changed no lease
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9335] manager: NetworkManager state is now CONNECTING
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9442] dhcp4 (eth1): canceled DHCP transaction
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9443] dhcp4 (eth1): state changed no lease
Oct 02 18:18:49 np0005467075.novalocal NetworkManager[861]: <info>  [1759429129.9497] exiting (success)
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Stopped Network Manager.
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: NetworkManager.service: Consumed 1.927s CPU time, 9.9M memory peak.
Oct 02 18:18:49 np0005467075.novalocal systemd[1]: Starting Network Manager...
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.0373] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:cafe0c2a-2d4b-4517-8a8b-b22a7ae0a086)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.0375] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.0435] manager[0x5626c0e79070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 18:18:50 np0005467075.novalocal systemd[1]: Starting Hostname Service...
Oct 02 18:18:50 np0005467075.novalocal systemd[1]: Started Hostname Service.
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1237] hostname: hostname: using hostnamed
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1241] hostname: static hostname changed from (none) to "np0005467075.novalocal"
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1249] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1256] manager[0x5626c0e79070]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1256] manager[0x5626c0e79070]: rfkill: WWAN hardware radio set enabled
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1300] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1300] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1302] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1303] manager: Networking is enabled by state file
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1307] settings: Loaded settings plugin: keyfile (internal)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1313] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1358] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1375] dhcp: init: Using DHCP client 'internal'
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1379] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1388] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1399] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1414] device (lo): Activation: starting connection 'lo' (aeebfd8a-b15e-4738-a7c9-24998c83f095)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1425] device (eth0): carrier: link connected
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1432] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1441] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1442] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1457] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1467] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1476] device (eth1): carrier: link connected
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1482] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1489] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d4b15060-b769-328b-b6bc-44454af900b8) (indicated)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1489] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1499] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1508] device (eth1): Activation: starting connection 'Wired connection 1' (d4b15060-b769-328b-b6bc-44454af900b8)
Oct 02 18:18:50 np0005467075.novalocal systemd[1]: Started Network Manager.
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1517] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1522] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1527] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1530] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1533] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1536] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1552] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1555] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1558] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1566] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1569] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1580] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1583] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1600] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1605] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1611] device (lo): Activation: successful, device activated.
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1646] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1653] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 18:18:50 np0005467075.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1728] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1746] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1748] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1751] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1755] device (eth0): Activation: successful, device activated.
Oct 02 18:18:50 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429130.1760] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 18:18:50 np0005467075.novalocal sudo[3937]: pam_unix(sudo:session): session closed for user root
Oct 02 18:18:50 np0005467075.novalocal python3[4025]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-0fcc-2c06-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:18:51 np0005467075.novalocal sshd-session[3994]: Received disconnect from 141.98.10.225 port 22644:11:  [preauth]
Oct 02 18:18:51 np0005467075.novalocal sshd-session[3994]: Disconnected from authenticating user root 141.98.10.225 port 22644 [preauth]
Oct 02 18:19:00 np0005467075.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:19:20 np0005467075.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3196] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:19:35 np0005467075.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:19:35 np0005467075.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3429] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3433] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3443] device (eth1): Activation: successful, device activated.
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3452] manager: startup complete
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3455] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <warn>  [1759429175.3466] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3476] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3603] dhcp4 (eth1): canceled DHCP transaction
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3603] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3604] dhcp4 (eth1): state changed no lease
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3625] policy: auto-activating connection 'ci-private-network' (ddf31ed9-79b0-5b7b-a7a3-0e250874a52d)
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3632] device (eth1): Activation: starting connection 'ci-private-network' (ddf31ed9-79b0-5b7b-a7a3-0e250874a52d)
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3634] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3638] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3649] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3662] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3707] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3711] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:19:35 np0005467075.novalocal NetworkManager[3949]: <info>  [1759429175.3721] device (eth1): Activation: successful, device activated.
Oct 02 18:19:43 np0005467075.novalocal sudo[4129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdllpkzfuuffwmvbnlkdzbajkcjszau ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:19:43 np0005467075.novalocal sudo[4129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:19:43 np0005467075.novalocal python3[4131]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:19:43 np0005467075.novalocal sudo[4129]: pam_unix(sudo:session): session closed for user root
Oct 02 18:19:44 np0005467075.novalocal sudo[4202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwnnmycqtcytntwwwtayszikpflscysj ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 18:19:44 np0005467075.novalocal sudo[4202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:19:44 np0005467075.novalocal python3[4204]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759429183.5115047-259-144964354243036/source _original_basename=tmpc0xdoi0x follow=False checksum=8c6b2b699c80c58ff7f57fcff886ca986d119ec2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:19:44 np0005467075.novalocal sudo[4202]: pam_unix(sudo:session): session closed for user root
Oct 02 18:19:45 np0005467075.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:19:48 np0005467075.novalocal systemd[1063]: Starting Mark boot as successful...
Oct 02 18:19:48 np0005467075.novalocal systemd[1063]: Finished Mark boot as successful.
Oct 02 18:20:44 np0005467075.novalocal sshd-session[1072]: Received disconnect from 38.102.83.114 port 60914:11: disconnected by user
Oct 02 18:20:44 np0005467075.novalocal sshd-session[1072]: Disconnected from user zuul 38.102.83.114 port 60914
Oct 02 18:20:44 np0005467075.novalocal sshd-session[1059]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:20:44 np0005467075.novalocal systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct 02 18:22:48 np0005467075.novalocal systemd[1063]: Created slice User Background Tasks Slice.
Oct 02 18:22:48 np0005467075.novalocal systemd[1063]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 18:22:48 np0005467075.novalocal systemd[1063]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 18:25:19 np0005467075.novalocal sshd-session[4236]: Received disconnect from 91.224.92.108 port 15400:11:  [preauth]
Oct 02 18:25:19 np0005467075.novalocal sshd-session[4236]: Disconnected from authenticating user root 91.224.92.108 port 15400 [preauth]
Oct 02 18:25:33 np0005467075.novalocal sshd-session[4239]: Accepted publickey for zuul from 38.102.83.114 port 38056 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 18:25:33 np0005467075.novalocal systemd-logind[798]: New session 3 of user zuul.
Oct 02 18:25:33 np0005467075.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 02 18:25:33 np0005467075.novalocal sshd-session[4239]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:25:33 np0005467075.novalocal sudo[4266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thminyzdzixszhhvarzkxqnxdneikpmw ; /usr/bin/python3'
Oct 02 18:25:33 np0005467075.novalocal sudo[4266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:33 np0005467075.novalocal python3[4268]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-a0b1-8451-000000001d02-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:25:33 np0005467075.novalocal sudo[4266]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:34 np0005467075.novalocal sudo[4295]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apyclabtaoebchypwefyyjznofkfpvhn ; /usr/bin/python3'
Oct 02 18:25:34 np0005467075.novalocal sudo[4295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:34 np0005467075.novalocal python3[4297]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:25:34 np0005467075.novalocal sudo[4295]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:34 np0005467075.novalocal sudo[4321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwujipwcjbjpvwuzpgrcopwdwmyahax ; /usr/bin/python3'
Oct 02 18:25:34 np0005467075.novalocal sudo[4321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:34 np0005467075.novalocal python3[4323]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:25:34 np0005467075.novalocal sudo[4321]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:35 np0005467075.novalocal sudo[4347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzhoelybnkoriliqpjcbmzblcnwlbol ; /usr/bin/python3'
Oct 02 18:25:35 np0005467075.novalocal sudo[4347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:35 np0005467075.novalocal python3[4349]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:25:35 np0005467075.novalocal sudo[4347]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:35 np0005467075.novalocal sudo[4373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyxkhmnznnbsckzdwhynddgaigrwxszw ; /usr/bin/python3'
Oct 02 18:25:35 np0005467075.novalocal sudo[4373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:35 np0005467075.novalocal python3[4375]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:25:35 np0005467075.novalocal sudo[4373]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:35 np0005467075.novalocal sudo[4399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zidhgmkjffbolguospowkqnbfozmfcvm ; /usr/bin/python3'
Oct 02 18:25:35 np0005467075.novalocal sudo[4399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:36 np0005467075.novalocal python3[4401]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:25:36 np0005467075.novalocal python3[4401]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 02 18:25:36 np0005467075.novalocal sudo[4399]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:36 np0005467075.novalocal sudo[4425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utvkqgwvcxuadxkohegcsykynudqiqfm ; /usr/bin/python3'
Oct 02 18:25:36 np0005467075.novalocal sudo[4425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:36 np0005467075.novalocal python3[4427]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 18:25:36 np0005467075.novalocal systemd[1]: Reloading.
Oct 02 18:25:37 np0005467075.novalocal systemd-rc-local-generator[4447]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:25:37 np0005467075.novalocal sudo[4425]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:38 np0005467075.novalocal sudo[4482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlucbefmwdvulziftvlmqjsdnlnlvaq ; /usr/bin/python3'
Oct 02 18:25:38 np0005467075.novalocal sudo[4482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:38 np0005467075.novalocal python3[4484]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 02 18:25:38 np0005467075.novalocal sudo[4482]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:38 np0005467075.novalocal sudo[4508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohropplxmpcjbhznifcqxxvevqilydh ; /usr/bin/python3'
Oct 02 18:25:38 np0005467075.novalocal sudo[4508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:39 np0005467075.novalocal python3[4510]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:25:39 np0005467075.novalocal sudo[4508]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:39 np0005467075.novalocal sudo[4536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teflwvzvdtqxytqyqwwmbjnyomfjcdpe ; /usr/bin/python3'
Oct 02 18:25:39 np0005467075.novalocal sudo[4536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:39 np0005467075.novalocal python3[4538]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:25:39 np0005467075.novalocal sudo[4536]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:39 np0005467075.novalocal sudo[4564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjdraclyjlgslycmtdtfiniqjxkohpa ; /usr/bin/python3'
Oct 02 18:25:39 np0005467075.novalocal sudo[4564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:39 np0005467075.novalocal python3[4566]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:25:39 np0005467075.novalocal sudo[4564]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:39 np0005467075.novalocal sudo[4592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbynxknwrycoccyhhjrptmyxpytfyvhz ; /usr/bin/python3'
Oct 02 18:25:39 np0005467075.novalocal sudo[4592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:39 np0005467075.novalocal python3[4594]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:25:39 np0005467075.novalocal sudo[4592]: pam_unix(sudo:session): session closed for user root
Oct 02 18:25:40 np0005467075.novalocal python3[4621]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-a0b1-8451-000000001d08-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:25:40 np0005467075.novalocal python3[4651]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:25:43 np0005467075.novalocal sshd-session[4242]: Connection closed by 38.102.83.114 port 38056
Oct 02 18:25:43 np0005467075.novalocal sshd-session[4239]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:25:43 np0005467075.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 02 18:25:43 np0005467075.novalocal systemd[1]: session-3.scope: Consumed 3.806s CPU time.
Oct 02 18:25:43 np0005467075.novalocal systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct 02 18:25:43 np0005467075.novalocal systemd-logind[798]: Removed session 3.
Oct 02 18:25:45 np0005467075.novalocal sshd-session[4657]: Accepted publickey for zuul from 38.102.83.114 port 55302 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 18:25:45 np0005467075.novalocal systemd-logind[798]: New session 4 of user zuul.
Oct 02 18:25:45 np0005467075.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 02 18:25:45 np0005467075.novalocal sshd-session[4657]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:25:45 np0005467075.novalocal sudo[4684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfzhmqijbulufmequfocuvhhwcujnzgd ; /usr/bin/python3'
Oct 02 18:25:45 np0005467075.novalocal sudo[4684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:25:45 np0005467075.novalocal python3[4686]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:26:04 np0005467075.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:26:13 np0005467075.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:26:22 np0005467075.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:26:23 np0005467075.novalocal setsebool[4749]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 02 18:26:23 np0005467075.novalocal setsebool[4749]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:26:33 np0005467075.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:26:51 np0005467075.novalocal dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 18:26:52 np0005467075.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:26:52 np0005467075.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:26:52 np0005467075.novalocal systemd[1]: Reloading.
Oct 02 18:26:52 np0005467075.novalocal systemd-rc-local-generator[5505]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:26:52 np0005467075.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:26:53 np0005467075.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 02 18:26:53 np0005467075.novalocal PackageKit[6225]: daemon start
Oct 02 18:26:53 np0005467075.novalocal systemd[1]: Starting Authorization Manager...
Oct 02 18:26:53 np0005467075.novalocal polkitd[6312]: Started polkitd version 0.117
Oct 02 18:26:53 np0005467075.novalocal polkitd[6312]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 18:26:53 np0005467075.novalocal polkitd[6312]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 18:26:53 np0005467075.novalocal polkitd[6312]: Finished loading, compiling and executing 3 rules
Oct 02 18:26:53 np0005467075.novalocal systemd[1]: Started Authorization Manager.
Oct 02 18:26:53 np0005467075.novalocal polkitd[6312]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 02 18:26:53 np0005467075.novalocal systemd[1]: Started PackageKit Daemon.
Oct 02 18:26:53 np0005467075.novalocal sudo[4684]: pam_unix(sudo:session): session closed for user root
Oct 02 18:26:56 np0005467075.novalocal python3[8472]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-bce0-81d1-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:26:56 np0005467075.novalocal kernel: evm: overlay not supported
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: Starting D-Bus User Message Bus...
Oct 02 18:26:57 np0005467075.novalocal dbus-broker-launch[9257]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 02 18:26:57 np0005467075.novalocal dbus-broker-launch[9257]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: Started D-Bus User Message Bus.
Oct 02 18:26:57 np0005467075.novalocal dbus-broker-lau[9257]: Ready
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: Created slice Slice /user.
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: podman-9132.scope: unit configures an IP firewall, but not running as root.
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: (This warning is only shown for the first unit using IP firewalling.)
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: Started podman-9132.scope.
Oct 02 18:26:57 np0005467075.novalocal systemd[1063]: Started podman-pause-27158cce.scope.
Oct 02 18:26:57 np0005467075.novalocal sudo[9690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clharxgdnhodntfcsxfqwfycpatxaqwo ; /usr/bin/python3'
Oct 02 18:26:57 np0005467075.novalocal sudo[9690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:26:57 np0005467075.novalocal python3[9713]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                      location = "38.102.83.39:5001"
                                                      insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                      location = "38.102.83.39:5001"
                                                      insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:26:57 np0005467075.novalocal sudo[9690]: pam_unix(sudo:session): session closed for user root
Oct 02 18:26:58 np0005467075.novalocal sshd-session[4660]: Connection closed by 38.102.83.114 port 55302
Oct 02 18:26:58 np0005467075.novalocal sshd-session[4657]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:26:58 np0005467075.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 02 18:26:58 np0005467075.novalocal systemd[1]: session-4.scope: Consumed 58.961s CPU time.
Oct 02 18:26:58 np0005467075.novalocal systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct 02 18:26:58 np0005467075.novalocal systemd-logind[798]: Removed session 4.
Oct 02 18:27:19 np0005467075.novalocal sshd-session[16717]: Unable to negotiate with 38.102.83.227 port 42850: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 18:27:19 np0005467075.novalocal sshd-session[16715]: Connection closed by 38.102.83.227 port 42818 [preauth]
Oct 02 18:27:19 np0005467075.novalocal sshd-session[16712]: Unable to negotiate with 38.102.83.227 port 42836: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 18:27:19 np0005467075.novalocal sshd-session[16719]: Connection closed by 38.102.83.227 port 42822 [preauth]
Oct 02 18:27:19 np0005467075.novalocal sshd-session[16718]: Unable to negotiate with 38.102.83.227 port 42856: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 18:27:24 np0005467075.novalocal sshd-session[17861]: Accepted publickey for zuul from 38.102.83.114 port 53298 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 18:27:24 np0005467075.novalocal systemd-logind[798]: New session 5 of user zuul.
Oct 02 18:27:24 np0005467075.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 02 18:27:24 np0005467075.novalocal sshd-session[17861]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:27:24 np0005467075.novalocal python3[17945]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXtQa7mzxdQ2Nj+f+/LO8v7wW5kaEBbIpzYzU8sjszAvm0PDiPBXY4oD12pdVodmIUzT0IWEk+N4EPeUhfzc1U= zuul@np0005467074.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:27:24 np0005467075.novalocal sudo[18081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxtsqvpgcoriapxpciocomugkrmljox ; /usr/bin/python3'
Oct 02 18:27:24 np0005467075.novalocal sudo[18081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:27:25 np0005467075.novalocal python3[18090]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXtQa7mzxdQ2Nj+f+/LO8v7wW5kaEBbIpzYzU8sjszAvm0PDiPBXY4oD12pdVodmIUzT0IWEk+N4EPeUhfzc1U= zuul@np0005467074.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:27:25 np0005467075.novalocal sudo[18081]: pam_unix(sudo:session): session closed for user root
Oct 02 18:27:25 np0005467075.novalocal sudo[18302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqoctprjmltmekmipruvdouturpvlsmx ; /usr/bin/python3'
Oct 02 18:27:25 np0005467075.novalocal sudo[18302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:27:25 np0005467075.novalocal python3[18308]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005467075.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 02 18:27:25 np0005467075.novalocal useradd[18367]: new group: name=cloud-admin, GID=1002
Oct 02 18:27:25 np0005467075.novalocal useradd[18367]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 02 18:27:25 np0005467075.novalocal sudo[18302]: pam_unix(sudo:session): session closed for user root
Oct 02 18:27:26 np0005467075.novalocal sudo[18480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjoyvgocgjejmjbjbgycexdqqxwclkwq ; /usr/bin/python3'
Oct 02 18:27:26 np0005467075.novalocal sudo[18480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:27:26 np0005467075.novalocal python3[18486]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXtQa7mzxdQ2Nj+f+/LO8v7wW5kaEBbIpzYzU8sjszAvm0PDiPBXY4oD12pdVodmIUzT0IWEk+N4EPeUhfzc1U= zuul@np0005467074.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 18:27:26 np0005467075.novalocal sudo[18480]: pam_unix(sudo:session): session closed for user root
Oct 02 18:27:26 np0005467075.novalocal sudo[18700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smykabtavzsipxfjsmraytnvmjzujclk ; /usr/bin/python3'
Oct 02 18:27:26 np0005467075.novalocal sudo[18700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:27:26 np0005467075.novalocal python3[18710]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:27:26 np0005467075.novalocal sudo[18700]: pam_unix(sudo:session): session closed for user root
Oct 02 18:27:27 np0005467075.novalocal sudo[18901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjyuydfcxrkedanigqmewzecckwarkav ; /usr/bin/python3'
Oct 02 18:27:27 np0005467075.novalocal sudo[18901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:27:27 np0005467075.novalocal python3[18907]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759429646.5152884-135-184240474286936/source _original_basename=tmpha7rzdrw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:27:27 np0005467075.novalocal sudo[18901]: pam_unix(sudo:session): session closed for user root
Oct 02 18:27:27 np0005467075.novalocal sudo[19112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lilksoudoyzpifuuzfycjcyhaojtvimn ; /usr/bin/python3'
Oct 02 18:27:27 np0005467075.novalocal sudo[19112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:27:28 np0005467075.novalocal python3[19119]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 02 18:27:28 np0005467075.novalocal systemd[1]: Starting Hostname Service...
Oct 02 18:27:28 np0005467075.novalocal systemd[1]: Started Hostname Service.
Oct 02 18:27:28 np0005467075.novalocal systemd-hostnamed[19194]: Changed pretty hostname to 'compute-0'
Oct 02 18:27:28 compute-0 systemd-hostnamed[19194]: Hostname set to <compute-0> (static)
Oct 02 18:27:28 compute-0 NetworkManager[3949]: <info>  [1759429648.3011] hostname: static hostname changed from "np0005467075.novalocal" to "compute-0"
Oct 02 18:27:28 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:27:28 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:27:28 compute-0 sudo[19112]: pam_unix(sudo:session): session closed for user root
Oct 02 18:27:28 compute-0 sshd-session[17896]: Connection closed by 38.102.83.114 port 53298
Oct 02 18:27:28 compute-0 sshd-session[17861]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:27:28 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Oct 02 18:27:28 compute-0 systemd[1]: session-5.scope: Consumed 2.628s CPU time.
Oct 02 18:27:28 compute-0 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Oct 02 18:27:28 compute-0 systemd-logind[798]: Removed session 5.
Oct 02 18:27:38 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:27:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:27:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:27:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 16.689s CPU time.
Oct 02 18:27:55 compute-0 systemd[1]: run-rf6f5cb389ffc4afebc0192f7ca66c44e.service: Deactivated successfully.
Oct 02 18:27:58 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:29:48 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 02 18:29:48 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 02 18:29:48 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 02 18:29:48 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 02 18:30:27 compute-0 sshd-session[26563]: Invalid user admin from 139.19.117.197 port 45068
Oct 02 18:30:27 compute-0 sshd-session[26563]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Oct 02 18:30:37 compute-0 sshd-session[26563]: Connection closed by invalid user admin 139.19.117.197 port 45068 [preauth]
Oct 02 18:31:39 compute-0 sshd-session[26566]: Received disconnect from 193.46.255.20 port 22240:11:  [preauth]
Oct 02 18:31:39 compute-0 sshd-session[26566]: Disconnected from authenticating user root 193.46.255.20 port 22240 [preauth]
Oct 02 18:31:57 compute-0 sshd-session[26568]: Accepted publickey for zuul from 38.102.83.227 port 37012 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 18:31:57 compute-0 systemd-logind[798]: New session 6 of user zuul.
Oct 02 18:31:57 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 02 18:31:57 compute-0 sshd-session[26568]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:31:57 compute-0 python3[26644]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:31:58 compute-0 PackageKit[6225]: daemon quit
Oct 02 18:31:58 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 18:31:59 compute-0 sudo[26758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvebdypsxtwcojhcietpduteiezvqkzf ; /usr/bin/python3'
Oct 02 18:31:59 compute-0 sudo[26758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:31:59 compute-0 python3[26760]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:31:59 compute-0 sudo[26758]: pam_unix(sudo:session): session closed for user root
Oct 02 18:31:59 compute-0 sudo[26831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvqbwxbcpzcohpkduspcbnfknjwfpyls ; /usr/bin/python3'
Oct 02 18:31:59 compute-0 sudo[26831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:31:59 compute-0 python3[26833]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:31:59 compute-0 sudo[26831]: pam_unix(sudo:session): session closed for user root
Oct 02 18:31:59 compute-0 sudo[26857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrkleeeqmiquhjlbvtzxrbpqkoqrkjt ; /usr/bin/python3'
Oct 02 18:31:59 compute-0 sudo[26857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:31:59 compute-0 python3[26859]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:31:59 compute-0 sudo[26857]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:00 compute-0 sudo[26930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjpmkxvdmwrqrzzidegynmezuengypl ; /usr/bin/python3'
Oct 02 18:32:00 compute-0 sudo[26930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:00 compute-0 python3[26932]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:32:00 compute-0 sudo[26930]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:00 compute-0 sudo[26956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwcfynavdkxtympgwjqwhtaznufrrpjt ; /usr/bin/python3'
Oct 02 18:32:00 compute-0 sudo[26956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:00 compute-0 python3[26958]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:32:00 compute-0 sudo[26956]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:00 compute-0 sudo[27029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwtosawbefmptsygffzlzplxeipndskw ; /usr/bin/python3'
Oct 02 18:32:00 compute-0 sudo[27029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:00 compute-0 python3[27031]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:32:00 compute-0 sudo[27029]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:01 compute-0 sudo[27055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmxsmvxuyatkxzconceehhqtbtfdbfzs ; /usr/bin/python3'
Oct 02 18:32:01 compute-0 sudo[27055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:01 compute-0 python3[27057]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:32:01 compute-0 sudo[27055]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:01 compute-0 sudo[27128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haqopuyuqaogtqmzekxhlechmiqorbqq ; /usr/bin/python3'
Oct 02 18:32:01 compute-0 sudo[27128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:01 compute-0 python3[27130]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:32:01 compute-0 sudo[27128]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:01 compute-0 sudo[27154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbxvnbksxbsqacnyhsqjjufiayqwtspg ; /usr/bin/python3'
Oct 02 18:32:01 compute-0 sudo[27154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:01 compute-0 python3[27156]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:32:01 compute-0 sudo[27154]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:02 compute-0 sudo[27227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfeyelexrntpgtodafvqvobxykfrcxoo ; /usr/bin/python3'
Oct 02 18:32:02 compute-0 sudo[27227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:02 compute-0 python3[27229]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:32:02 compute-0 sudo[27227]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:02 compute-0 sudo[27253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubditcmommxqlwtrzgxmymdzdtkpzdtk ; /usr/bin/python3'
Oct 02 18:32:02 compute-0 sudo[27253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:02 compute-0 python3[27255]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:32:02 compute-0 sudo[27253]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:02 compute-0 sudo[27326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyojaxfyomktfigoyfohtmbncgpotowv ; /usr/bin/python3'
Oct 02 18:32:02 compute-0 sudo[27326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:02 compute-0 python3[27328]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:32:02 compute-0 sudo[27326]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:03 compute-0 sudo[27352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabxvualfpkpotggjhxhotcfsmednnkt ; /usr/bin/python3'
Oct 02 18:32:03 compute-0 sudo[27352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:03 compute-0 python3[27354]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 18:32:03 compute-0 sudo[27352]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:03 compute-0 sudo[27425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quskleqgfjfexebkzrwhcraxduxwlfbz ; /usr/bin/python3'
Oct 02 18:32:03 compute-0 sudo[27425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:32:03 compute-0 python3[27427]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759429918.9285965-30238-93281227899800/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:32:03 compute-0 sudo[27425]: pam_unix(sudo:session): session closed for user root
Oct 02 18:32:06 compute-0 sshd-session[27452]: Connection closed by 192.168.122.11 port 57110 [preauth]
Oct 02 18:32:06 compute-0 sshd-session[27453]: Connection closed by 192.168.122.11 port 57116 [preauth]
Oct 02 18:32:06 compute-0 sshd-session[27454]: Unable to negotiate with 192.168.122.11 port 57118: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 18:32:06 compute-0 sshd-session[27456]: Unable to negotiate with 192.168.122.11 port 57126: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 18:32:06 compute-0 sshd-session[27457]: Unable to negotiate with 192.168.122.11 port 57138: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 18:33:10 compute-0 sshd-session[27463]: banner exchange: Connection from 93.123.109.214 port 47250: invalid format
Oct 02 18:33:10 compute-0 sshd-session[27464]: banner exchange: Connection from 93.123.109.214 port 47264: invalid format
Oct 02 18:34:48 compute-0 python3[27488]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:38:10 compute-0 sshd-session[27491]: Received disconnect from 80.94.93.119 port 40138:11:  [preauth]
Oct 02 18:38:10 compute-0 sshd-session[27491]: Disconnected from authenticating user root 80.94.93.119 port 40138 [preauth]
Oct 02 18:38:39 compute-0 sshd-session[27495]: Connection closed by authenticating user root 160.30.172.107 port 57430 [preauth]
Oct 02 18:38:40 compute-0 sshd-session[27497]: Invalid user  from 49.234.53.181 port 46926
Oct 02 18:38:47 compute-0 sshd-session[27497]: Connection closed by invalid user  49.234.53.181 port 46926 [preauth]
Oct 02 18:39:48 compute-0 sshd-session[26571]: Received disconnect from 38.102.83.227 port 37012:11: disconnected by user
Oct 02 18:39:48 compute-0 sshd-session[26571]: Disconnected from user zuul 38.102.83.227 port 37012
Oct 02 18:39:48 compute-0 sshd-session[26568]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:39:48 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct 02 18:39:48 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 02 18:39:48 compute-0 systemd[1]: session-6.scope: Consumed 5.429s CPU time.
Oct 02 18:39:48 compute-0 systemd-logind[798]: Removed session 6.
Oct 02 18:44:10 compute-0 sshd-session[27501]: Received disconnect from 193.46.255.20 port 54828:11:  [preauth]
Oct 02 18:44:10 compute-0 sshd-session[27501]: Disconnected from authenticating user root 193.46.255.20 port 54828 [preauth]
Oct 02 18:47:38 compute-0 sshd-session[27505]: Invalid user admin from 78.128.112.74 port 47566
Oct 02 18:47:38 compute-0 sshd-session[27505]: Connection closed by invalid user admin 78.128.112.74 port 47566 [preauth]
Oct 02 18:48:09 compute-0 sshd-session[27507]: Accepted publickey for zuul from 192.168.122.30 port 38518 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:48:09 compute-0 systemd-logind[798]: New session 7 of user zuul.
Oct 02 18:48:09 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 02 18:48:09 compute-0 sshd-session[27507]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:48:10 compute-0 python3.9[27660]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:11 compute-0 sudo[27839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofiluoankctbdbltoiijoodfrigckkfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430891.350865-32-12729982445190/AnsiballZ_command.py'
Oct 02 18:48:11 compute-0 sudo[27839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:12 compute-0 python3.9[27841]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:19 compute-0 sudo[27839]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:19 compute-0 sshd-session[27510]: Connection closed by 192.168.122.30 port 38518
Oct 02 18:48:19 compute-0 sshd-session[27507]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:48:19 compute-0 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Oct 02 18:48:19 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 02 18:48:19 compute-0 systemd[1]: session-7.scope: Consumed 8.116s CPU time.
Oct 02 18:48:19 compute-0 systemd-logind[798]: Removed session 7.
Oct 02 18:48:25 compute-0 sshd-session[27898]: Accepted publickey for zuul from 192.168.122.30 port 47716 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:48:25 compute-0 systemd-logind[798]: New session 8 of user zuul.
Oct 02 18:48:25 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 02 18:48:25 compute-0 sshd-session[27898]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:48:26 compute-0 python3.9[28051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:26 compute-0 sshd-session[27901]: Connection closed by 192.168.122.30 port 47716
Oct 02 18:48:26 compute-0 sshd-session[27898]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:48:26 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 02 18:48:26 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct 02 18:48:26 compute-0 systemd-logind[798]: Removed session 8.
Oct 02 18:48:43 compute-0 sshd-session[28079]: Accepted publickey for zuul from 192.168.122.30 port 43764 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:48:43 compute-0 systemd-logind[798]: New session 9 of user zuul.
Oct 02 18:48:43 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 02 18:48:43 compute-0 sshd-session[28079]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:48:44 compute-0 python3.9[28232]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 18:48:45 compute-0 python3.9[28406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:46 compute-0 sudo[28556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqfbzycbpcjrydabkxffvgxxsomgqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430925.6731696-45-156227420070882/AnsiballZ_command.py'
Oct 02 18:48:46 compute-0 sudo[28556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:46 compute-0 python3.9[28558]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:48:46 compute-0 sudo[28556]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:47 compute-0 sudo[28709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uradgxkecxijjoehevzvmjqnbhccxdnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430926.6776705-57-259576733844863/AnsiballZ_stat.py'
Oct 02 18:48:47 compute-0 sudo[28709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:47 compute-0 python3.9[28711]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:48:47 compute-0 sudo[28709]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:48 compute-0 sudo[28861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvknklhibyeuedxgmkjimizvukctvvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430927.5631642-65-185360670035787/AnsiballZ_file.py'
Oct 02 18:48:48 compute-0 sudo[28861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:48 compute-0 python3.9[28863]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:48:48 compute-0 sudo[28861]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:48 compute-0 sudo[29013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umndxuikgetxucjbxuwzjeoslgbvpxld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430928.4931538-73-85796675879713/AnsiballZ_stat.py'
Oct 02 18:48:48 compute-0 sudo[29013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:49 compute-0 python3.9[29015]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:48:49 compute-0 sudo[29013]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:49 compute-0 sudo[29136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijuweyxdjrdqbtasswubajfpfrjtrsct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430928.4931538-73-85796675879713/AnsiballZ_copy.py'
Oct 02 18:48:49 compute-0 sudo[29136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:49 compute-0 python3.9[29138]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759430928.4931538-73-85796675879713/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:48:49 compute-0 sudo[29136]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:50 compute-0 sudo[29288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bblfoblxlzbvjrstwvoxhxrukoihpazp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430930.145601-88-44261643989263/AnsiballZ_setup.py'
Oct 02 18:48:50 compute-0 sudo[29288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:50 compute-0 python3.9[29290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:51 compute-0 sudo[29288]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:51 compute-0 sudo[29444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzonmrvulmzhxpzwxajqfpzrzktgbqig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430931.2873588-96-122425938513144/AnsiballZ_file.py'
Oct 02 18:48:51 compute-0 sudo[29444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:48:51 compute-0 python3.9[29446]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:48:51 compute-0 sudo[29444]: pam_unix(sudo:session): session closed for user root
Oct 02 18:48:52 compute-0 python3.9[29596]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:48:57 compute-0 python3.9[29851]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:48:58 compute-0 python3.9[30001]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:48:59 compute-0 python3.9[30155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:49:00 compute-0 sudo[30311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbikxacckjzqeowtssnnpkvygbdudkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430940.0447729-144-37005054737605/AnsiballZ_setup.py'
Oct 02 18:49:00 compute-0 sudo[30311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:00 compute-0 python3.9[30313]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:49:00 compute-0 sudo[30311]: pam_unix(sudo:session): session closed for user root
Oct 02 18:49:01 compute-0 sudo[30395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxowwyxcnyadtfziqgnipgaefqckdcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759430940.0447729-144-37005054737605/AnsiballZ_dnf.py'
Oct 02 18:49:01 compute-0 sudo[30395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:49:01 compute-0 python3.9[30397]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:49:44 compute-0 systemd[1]: Reloading.
Oct 02 18:49:44 compute-0 systemd-rc-local-generator[30594]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:49:44 compute-0 systemd[1]: Starting dnf makecache...
Oct 02 18:49:44 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 02 18:49:45 compute-0 dnf[30603]: Failed determining last makecache time.
Oct 02 18:49:45 compute-0 systemd[1]: Reloading.
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-barbican-42b4c41831408a8e323 124 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 157 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-cinder-1c00d6490d88e436f26ef 191 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-python-stevedore-c4acc5639fd2329372142 176 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 systemd-rc-local-generator[30637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-python-cloudkitty-tests-tempest-3961dc 162 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-os-net-config-28598c2978b9e2207dd19fc4 193 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 153 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-python-designate-tests-tempest-347fdbc 182 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-glance-1fd12c29b339f30fe823e 192 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 172 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-manila-3c01b7181572c95dac462 185 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-python-whitebox-neutron-tests-tempest- 193 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-octavia-ba397f07a7331190208c 179 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-watcher-c014f81a8647287f6dcc 187 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-edpm-image-builder-55ba53cf215b14ed95b 175 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 189 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-swift-dc98a8463506ac520c469a 176 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 systemd[1]: Reloading.
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-python-tempestconf-8515371b7cceebd4282 178 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 dnf[30603]: delorean-openstack-heat-ui-013accbfd179753bc3f0 185 kB/s | 3.0 kB     00:00
Oct 02 18:49:45 compute-0 systemd-rc-local-generator[30692]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:49:45 compute-0 dnf[30603]: CentOS Stream 9 - BaseOS                         67 kB/s | 6.7 kB     00:00
Oct 02 18:49:45 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 02 18:49:45 compute-0 dnf[30603]: CentOS Stream 9 - AppStream                      27 kB/s | 6.8 kB     00:00
Oct 02 18:49:45 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 18:49:45 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 18:49:45 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 18:49:46 compute-0 dnf[30603]: CentOS Stream 9 - CRB                            68 kB/s | 6.6 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: CentOS Stream 9 - Extras packages                29 kB/s | 8.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: dlrn-antelope-testing                           108 kB/s | 3.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: dlrn-antelope-build-deps                        109 kB/s | 3.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: centos9-rabbitmq                                 70 kB/s | 3.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: centos9-storage                                  64 kB/s | 3.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: centos9-opstools                                 84 kB/s | 3.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: NFV SIG OpenvSwitch                              82 kB/s | 3.0 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: repo-setup-centos-appstream                     132 kB/s | 4.4 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: repo-setup-centos-baseos                        167 kB/s | 3.9 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: repo-setup-centos-highavailability               31 kB/s | 3.9 kB     00:00
Oct 02 18:49:46 compute-0 dnf[30603]: repo-setup-centos-powertools                    169 kB/s | 4.3 kB     00:00
Oct 02 18:49:47 compute-0 dnf[30603]: Extra Packages for Enterprise Linux 9 - x86_64  219 kB/s |  34 kB     00:00
Oct 02 18:49:47 compute-0 dnf[30603]: Metadata cache created.
Oct 02 18:49:47 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 18:49:47 compute-0 systemd[1]: Finished dnf makecache.
Oct 02 18:49:47 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.840s CPU time.
Oct 02 18:50:38 compute-0 sshd-session[30925]: Received disconnect from 91.224.92.108 port 54118:11:  [preauth]
Oct 02 18:50:38 compute-0 sshd-session[30925]: Disconnected from authenticating user root 91.224.92.108 port 54118 [preauth]
Oct 02 18:50:47 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:50:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:50:47 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 02 18:50:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:50:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:50:47 compute-0 systemd[1]: Reloading.
Oct 02 18:50:48 compute-0 systemd-rc-local-generator[31039]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:50:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:50:48 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 18:50:48 compute-0 PackageKit[31240]: daemon start
Oct 02 18:50:48 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 18:50:48 compute-0 sudo[30395]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:49 compute-0 sudo[31955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zstxbxmjciiyueacuebmshxgjkbobweb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431048.814934-156-261316720300213/AnsiballZ_command.py'
Oct 02 18:50:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:50:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:50:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.370s CPU time.
Oct 02 18:50:49 compute-0 sudo[31955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:49 compute-0 systemd[1]: run-r4bac7a06c85b413fa51939b4b4c2bd3f.service: Deactivated successfully.
Oct 02 18:50:49 compute-0 python3.9[31958]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:50:50 compute-0 sudo[31955]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:51 compute-0 sudo[32237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjsnzafoboxfcoodfacehohojnpmoxqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431050.6010776-164-85054732956710/AnsiballZ_selinux.py'
Oct 02 18:50:51 compute-0 sudo[32237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:51 compute-0 python3.9[32239]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 18:50:51 compute-0 sudo[32237]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:52 compute-0 sudo[32389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpgiwxbodbwaiswiezqttogdfsszzocn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431051.9372509-175-162429956135501/AnsiballZ_command.py'
Oct 02 18:50:52 compute-0 sudo[32389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:52 compute-0 python3.9[32391]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 18:50:53 compute-0 sudo[32389]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:54 compute-0 sudo[32542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlygggwuzuwviczcsjurtexdbfnigtpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431053.6880383-183-278851556657320/AnsiballZ_file.py'
Oct 02 18:50:54 compute-0 sudo[32542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:54 compute-0 python3.9[32544]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:54 compute-0 sudo[32542]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:55 compute-0 sudo[32694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzohjbpwmvcrweapqfbpcpjibboyvwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431054.6851408-191-152630478379429/AnsiballZ_mount.py'
Oct 02 18:50:55 compute-0 sudo[32694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:55 compute-0 python3.9[32696]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 18:50:55 compute-0 sudo[32694]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:56 compute-0 sudo[32846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrorsodtnlnsyvoosckpsbpzxlumrslg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431056.430399-219-201500464962749/AnsiballZ_file.py'
Oct 02 18:50:56 compute-0 sudo[32846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:57 compute-0 python3.9[32848]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:50:57 compute-0 sudo[32846]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:57 compute-0 sudo[32998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkjvalotxgqidemunsxtwhjzeysxqpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431057.2411275-227-279847619203648/AnsiballZ_stat.py'
Oct 02 18:50:57 compute-0 sudo[32998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:57 compute-0 python3.9[33000]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:50:57 compute-0 sudo[32998]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:58 compute-0 sudo[33121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsrodtgvncthesgangpcupyoeanmzvnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431057.2411275-227-279847619203648/AnsiballZ_copy.py'
Oct 02 18:50:58 compute-0 sudo[33121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:50:58 compute-0 python3.9[33123]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431057.2411275-227-279847619203648/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:50:58 compute-0 sudo[33121]: pam_unix(sudo:session): session closed for user root
Oct 02 18:50:59 compute-0 sudo[33274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgwrtedhvyackoykuaglqqxofplgvyik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431059.204933-254-90734375329660/AnsiballZ_getent.py'
Oct 02 18:50:59 compute-0 sudo[33274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:01 compute-0 python3.9[33276]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 18:51:01 compute-0 sudo[33274]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:02 compute-0 sudo[33428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndszxbnjwbqlfuwwjxsdpyfelapzpmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431062.2149-262-62963947442937/AnsiballZ_group.py'
Oct 02 18:51:02 compute-0 sudo[33428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:03 compute-0 python3.9[33430]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:51:03 compute-0 groupadd[33431]: group added to /etc/group: name=qemu, GID=107
Oct 02 18:51:03 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:51:03 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:51:03 compute-0 groupadd[33431]: group added to /etc/gshadow: name=qemu
Oct 02 18:51:03 compute-0 groupadd[33431]: new group: name=qemu, GID=107
Oct 02 18:51:03 compute-0 sudo[33428]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:03 compute-0 sudo[33587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dftzbqjletdgymjchmmhjffcrbvlxtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431063.2757707-270-122562088954986/AnsiballZ_user.py'
Oct 02 18:51:03 compute-0 sudo[33587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:04 compute-0 python3.9[33589]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 18:51:04 compute-0 useradd[33591]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 18:51:04 compute-0 sudo[33587]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:04 compute-0 sudo[33747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fydjtwarbxvyxtgtqpkyfjbitvsllvsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431064.5614593-278-172199591727899/AnsiballZ_getent.py'
Oct 02 18:51:04 compute-0 sudo[33747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:05 compute-0 python3.9[33749]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 18:51:05 compute-0 sudo[33747]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:05 compute-0 sudo[33900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmzmaiteijpuemnuveextvuqyafcirqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431065.3156772-286-175257482195769/AnsiballZ_group.py'
Oct 02 18:51:05 compute-0 sudo[33900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:05 compute-0 python3.9[33902]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:51:05 compute-0 groupadd[33903]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 02 18:51:05 compute-0 groupadd[33903]: group added to /etc/gshadow: name=hugetlbfs
Oct 02 18:51:05 compute-0 groupadd[33903]: new group: name=hugetlbfs, GID=42477
Oct 02 18:51:05 compute-0 sudo[33900]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:06 compute-0 sudo[34058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsafevcawljgpzhvsmlknuautqhjoagy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431066.066506-295-36693285393096/AnsiballZ_file.py'
Oct 02 18:51:06 compute-0 sudo[34058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:06 compute-0 python3.9[34060]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 18:51:06 compute-0 sudo[34058]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:07 compute-0 sudo[34210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzvoqunrfhnexafjivkwugvbrkpzykdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431067.0587509-306-189420854806690/AnsiballZ_dnf.py'
Oct 02 18:51:07 compute-0 sudo[34210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:07 compute-0 python3.9[34212]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:51:09 compute-0 sudo[34210]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:09 compute-0 sudo[34363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmedkkkjplfgyiyvzipuomhifptqbpdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431069.3540652-314-79707106125417/AnsiballZ_file.py'
Oct 02 18:51:09 compute-0 sudo[34363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:09 compute-0 python3.9[34365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:09 compute-0 sudo[34363]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:10 compute-0 sudo[34515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsquwljrcgdekomfcqripmaynilwpdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431070.077962-322-65578035023524/AnsiballZ_stat.py'
Oct 02 18:51:10 compute-0 sudo[34515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:10 compute-0 python3.9[34517]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:10 compute-0 sudo[34515]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:10 compute-0 sudo[34638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwjgzblkodolyacqezqcqdublzmftkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431070.077962-322-65578035023524/AnsiballZ_copy.py'
Oct 02 18:51:10 compute-0 sudo[34638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:11 compute-0 python3.9[34640]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431070.077962-322-65578035023524/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:11 compute-0 sudo[34638]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:11 compute-0 sudo[34790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjrlmqfwwbetyiypykeyunompuevgdra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431071.3636143-337-8712596434323/AnsiballZ_systemd.py'
Oct 02 18:51:11 compute-0 sudo[34790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:12 compute-0 python3.9[34792]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:51:12 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 18:51:12 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 02 18:51:12 compute-0 kernel: Bridge firewalling registered
Oct 02 18:51:12 compute-0 systemd-modules-load[34796]: Inserted module 'br_netfilter'
Oct 02 18:51:12 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 18:51:12 compute-0 sudo[34790]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:13 compute-0 sudo[34950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkxurobmvuklbduzzdpquvtokwfiqcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431072.6647115-345-206535542466939/AnsiballZ_stat.py'
Oct 02 18:51:13 compute-0 sudo[34950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:13 compute-0 python3.9[34952]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:13 compute-0 sudo[34950]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:13 compute-0 sudo[35073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bclarauhiubeeaxzbbuccljygytoykbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431072.6647115-345-206535542466939/AnsiballZ_copy.py'
Oct 02 18:51:13 compute-0 sudo[35073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:13 compute-0 python3.9[35075]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431072.6647115-345-206535542466939/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:13 compute-0 sudo[35073]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:14 compute-0 sudo[35225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhpywhavhakaqhockzleensykmdxmnnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431074.199669-363-179033390310887/AnsiballZ_dnf.py'
Oct 02 18:51:14 compute-0 sudo[35225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:14 compute-0 python3.9[35227]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:51:18 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 18:51:18 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 18:51:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:51:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:51:18 compute-0 systemd[1]: Reloading.
Oct 02 18:51:18 compute-0 systemd-rc-local-generator[35286]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:51:19 compute-0 sudo[35225]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:20 compute-0 python3.9[36396]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:51:20 compute-0 python3.9[37328]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 18:51:21 compute-0 python3.9[38090]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:51:22 compute-0 sudo[38904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdlnwgdfxmrpgachvkiehoxsbgxeiskw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431081.8606067-402-231430397734886/AnsiballZ_command.py'
Oct 02 18:51:22 compute-0 sudo[38904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:22 compute-0 python3.9[38930]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:22 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 18:51:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:51:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:51:22 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.314s CPU time.
Oct 02 18:51:22 compute-0 systemd[1]: run-r186a1be76c264e87821e60c76f4e6e01.service: Deactivated successfully.
Oct 02 18:51:22 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 18:51:22 compute-0 sudo[38904]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:23 compute-0 sudo[39762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwcnrotpsygxvyfizjpqnfhwbekqmjlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431083.1260133-411-257675813628359/AnsiballZ_systemd.py'
Oct 02 18:51:23 compute-0 sudo[39762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:23 compute-0 python3.9[39764]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:23 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 18:51:23 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 18:51:23 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 18:51:23 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 18:51:24 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 18:51:24 compute-0 sudo[39762]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:24 compute-0 python3.9[39925]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 18:51:27 compute-0 sudo[40075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heviozegdcrjrwdbxhcmpsmshmidlemj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431086.8087354-468-156701294458972/AnsiballZ_systemd.py'
Oct 02 18:51:27 compute-0 sudo[40075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:27 compute-0 python3.9[40077]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:27 compute-0 systemd[1]: Reloading.
Oct 02 18:51:27 compute-0 systemd-rc-local-generator[40099]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:27 compute-0 sudo[40075]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:28 compute-0 sudo[40264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtcwalsjeqdytqpictnytgxinusejcdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431087.8933501-468-99686509774103/AnsiballZ_systemd.py'
Oct 02 18:51:28 compute-0 sudo[40264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:28 compute-0 python3.9[40266]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:51:28 compute-0 systemd[1]: Reloading.
Oct 02 18:51:28 compute-0 systemd-rc-local-generator[40294]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:51:28 compute-0 sudo[40264]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:29 compute-0 sudo[40453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hezcurtgvacobtrzycuxvvahrzmzgzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431089.10358-484-255920239149710/AnsiballZ_command.py'
Oct 02 18:51:29 compute-0 sudo[40453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:29 compute-0 python3.9[40455]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:29 compute-0 sudo[40453]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:30 compute-0 sudo[40606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzltlyyfupyovmtqimitbmebdaqerjvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431089.849299-492-133183227352218/AnsiballZ_command.py'
Oct 02 18:51:30 compute-0 sudo[40606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:30 compute-0 python3.9[40608]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:30 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 02 18:51:30 compute-0 sudo[40606]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:31 compute-0 sudo[40759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saaythcixbkqleqdbbrfpbzdfbhlnzlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431090.6634214-500-61216122573987/AnsiballZ_command.py'
Oct 02 18:51:31 compute-0 sudo[40759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:31 compute-0 python3.9[40761]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:32 compute-0 sudo[40759]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:33 compute-0 sudo[40921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywbhswcmpewanyafaektxzpypkirrdrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431092.8609643-508-276879972148742/AnsiballZ_command.py'
Oct 02 18:51:33 compute-0 sudo[40921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:33 compute-0 python3.9[40923]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:33 compute-0 sudo[40921]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:34 compute-0 sudo[41074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbrprmjkwyypyycbpidccxbmluxqznq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431093.6866493-516-28647210019551/AnsiballZ_systemd.py'
Oct 02 18:51:34 compute-0 sudo[41074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:34 compute-0 python3.9[41076]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:51:34 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 18:51:34 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 02 18:51:34 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 02 18:51:34 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 02 18:51:34 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 18:51:34 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 02 18:51:34 compute-0 sudo[41074]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:34 compute-0 sshd-session[28082]: Connection closed by 192.168.122.30 port 43764
Oct 02 18:51:34 compute-0 sshd-session[28079]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:51:34 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct 02 18:51:34 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 02 18:51:34 compute-0 systemd[1]: session-9.scope: Consumed 2min 13.483s CPU time.
Oct 02 18:51:34 compute-0 systemd-logind[798]: Removed session 9.
Oct 02 18:51:41 compute-0 sshd-session[41106]: Accepted publickey for zuul from 192.168.122.30 port 49324 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:51:41 compute-0 systemd-logind[798]: New session 10 of user zuul.
Oct 02 18:51:41 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 02 18:51:41 compute-0 sshd-session[41106]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:51:42 compute-0 python3.9[41259]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:51:43 compute-0 python3.9[41413]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:51:44 compute-0 sudo[41567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawzpyngbeedssymowtthwsyyvhallfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431104.3021355-50-184165150854675/AnsiballZ_command.py'
Oct 02 18:51:44 compute-0 sudo[41567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:44 compute-0 python3.9[41569]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:45 compute-0 sudo[41567]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:46 compute-0 python3.9[41720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:51:46 compute-0 sudo[41874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lloxpuirjwyskspezdruiylmqczmovgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431106.4088461-70-149453366407400/AnsiballZ_setup.py'
Oct 02 18:51:46 compute-0 sudo[41874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:47 compute-0 python3.9[41876]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:51:47 compute-0 sudo[41874]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:47 compute-0 sudo[41958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrynrjqpzwhlxjkffjxlsauugmphxjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431106.4088461-70-149453366407400/AnsiballZ_dnf.py'
Oct 02 18:51:47 compute-0 sudo[41958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:48 compute-0 python3.9[41960]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:51:49 compute-0 sudo[41958]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:49 compute-0 sudo[42111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cblgebmmzkkghxddsfheohiqtsdzbjzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431109.4781382-82-44046902565874/AnsiballZ_setup.py'
Oct 02 18:51:49 compute-0 sudo[42111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:50 compute-0 python3.9[42113]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:51:50 compute-0 sudo[42111]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:51 compute-0 sudo[42282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvczsdtbvcgbauzzghblabsdyrchzbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431110.5959287-93-168136137221525/AnsiballZ_file.py'
Oct 02 18:51:51 compute-0 sudo[42282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:51 compute-0 python3.9[42284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:51 compute-0 sudo[42282]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:51 compute-0 sudo[42434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsewqpslqmzrlpfviouthxymilennowp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431111.4991076-101-122039062823192/AnsiballZ_command.py'
Oct 02 18:51:51 compute-0 sudo[42434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:52 compute-0 python3.9[42436]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2229083939-merged.mount: Deactivated successfully.
Oct 02 18:51:52 compute-0 podman[42437]: 2025-10-02 18:51:52.117976777 +0000 UTC m=+0.077479134 system refresh
Oct 02 18:51:52 compute-0 sudo[42434]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:52 compute-0 sudo[42598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfknshbaskdcsqzkdbbbinaciitcuwpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431112.3169546-109-136255007429577/AnsiballZ_stat.py'
Oct 02 18:51:52 compute-0 sudo[42598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:53 compute-0 python3.9[42600]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:53 compute-0 sudo[42598]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:51:53 compute-0 sudo[42721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkkpqsemrndpfgdndfjgidfknymmmazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431112.3169546-109-136255007429577/AnsiballZ_copy.py'
Oct 02 18:51:53 compute-0 sudo[42721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:53 compute-0 python3.9[42723]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431112.3169546-109-136255007429577/.source.json follow=False _original_basename=podman_network_config.j2 checksum=e0a76b8bc214e98610f7d79c9debe749e8809ef8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:51:53 compute-0 sudo[42721]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:54 compute-0 sudo[42873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajmpmxcbgygabjxlnoenlqlbhdyykfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431114.1310515-124-166855518506623/AnsiballZ_stat.py'
Oct 02 18:51:54 compute-0 sudo[42873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:54 compute-0 python3.9[42875]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:51:54 compute-0 sudo[42873]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:55 compute-0 sudo[42996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeaoowuxztqrqxtbglequldtkjznxkbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431114.1310515-124-166855518506623/AnsiballZ_copy.py'
Oct 02 18:51:55 compute-0 sudo[42996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:55 compute-0 python3.9[42998]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431114.1310515-124-166855518506623/.source.conf follow=False _original_basename=registries.conf.j2 checksum=f27f86218e398aa50b444b0bf8b9e443f3d2c120 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:55 compute-0 sudo[42996]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:56 compute-0 sudo[43148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lylneehfpkfkayhdrdvishzbkbojjait ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431115.5961394-140-186481612631621/AnsiballZ_ini_file.py'
Oct 02 18:51:56 compute-0 sudo[43148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:56 compute-0 python3.9[43150]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:56 compute-0 sudo[43148]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:56 compute-0 sudo[43300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccsuptdcdkjgbfyehlqvzrvabhfkntu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431116.4382443-140-51877753208302/AnsiballZ_ini_file.py'
Oct 02 18:51:56 compute-0 sudo[43300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:57 compute-0 python3.9[43302]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:57 compute-0 sudo[43300]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:57 compute-0 sudo[43452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhwmzovwlddklxlymlxbhaffwixczlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431117.246293-140-98001721986977/AnsiballZ_ini_file.py'
Oct 02 18:51:57 compute-0 sudo[43452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:57 compute-0 python3.9[43454]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:57 compute-0 sudo[43452]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:58 compute-0 sudo[43604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mewurfntypeazeeehilbpflpqrqmzdos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431118.0334995-140-261462337917537/AnsiballZ_ini_file.py'
Oct 02 18:51:58 compute-0 sudo[43604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:51:58 compute-0 python3.9[43606]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:51:58 compute-0 sudo[43604]: pam_unix(sudo:session): session closed for user root
Oct 02 18:51:59 compute-0 python3.9[43756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:52:00 compute-0 sudo[43908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdzlkcudkaplxnsyyiwajiqbcgihmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431119.8381608-180-227035501405987/AnsiballZ_dnf.py'
Oct 02 18:52:00 compute-0 sudo[43908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:00 compute-0 python3.9[43910]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:01 compute-0 sudo[43908]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:02 compute-0 sudo[44061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eejofwlcnegxpnrnqothzxkougcaclud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431121.7605786-188-231185763536991/AnsiballZ_dnf.py'
Oct 02 18:52:02 compute-0 sudo[44061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:02 compute-0 python3.9[44063]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:04 compute-0 sudo[44061]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:05 compute-0 sudo[44221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obnxzydpsihbxyiouvoqqflxweidwutq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431124.632461-198-46155269476112/AnsiballZ_dnf.py'
Oct 02 18:52:05 compute-0 sudo[44221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:05 compute-0 python3.9[44223]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:06 compute-0 sudo[44221]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:07 compute-0 sudo[44374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nawqhllgegygajshfiptvqrlndgpeyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431126.6933048-207-277918014154814/AnsiballZ_dnf.py'
Oct 02 18:52:07 compute-0 sudo[44374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:07 compute-0 python3.9[44376]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:08 compute-0 sudo[44374]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:09 compute-0 sudo[44527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdwkxlllxssszqgvgmsldcjcilgmkjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431128.7734494-218-211148581403105/AnsiballZ_dnf.py'
Oct 02 18:52:09 compute-0 sudo[44527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:09 compute-0 python3.9[44529]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:10 compute-0 sudo[44527]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:11 compute-0 sudo[44683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqckbkvpqeqnzffmnziwacndvyzbeypw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431130.953099-226-229169039301750/AnsiballZ_dnf.py'
Oct 02 18:52:11 compute-0 sudo[44683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:11 compute-0 python3.9[44685]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:14 compute-0 sudo[44683]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:15 compute-0 sudo[44851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seqzrdixjobrwcutacekngztnvifljxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431134.6045537-235-211913022636786/AnsiballZ_dnf.py'
Oct 02 18:52:15 compute-0 sudo[44851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:15 compute-0 python3.9[44853]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:16 compute-0 sudo[44851]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:17 compute-0 sudo[45004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvpkvhofpkpjqmnndtqromxtdeoftawf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431136.7016678-244-154539592852354/AnsiballZ_dnf.py'
Oct 02 18:52:17 compute-0 sudo[45004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:17 compute-0 python3.9[45006]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:52:30 compute-0 sudo[45004]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:30 compute-0 sudo[45342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvazwnothehybfnxlrfskspfzzbehvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431150.5546315-255-54057340368105/AnsiballZ_file.py'
Oct 02 18:52:30 compute-0 sudo[45342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:31 compute-0 python3.9[45344]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:31 compute-0 sudo[45342]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:31 compute-0 sudo[45517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjmkvfilaocirhagqevertxytcrmdwar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431151.242694-263-131197987031310/AnsiballZ_stat.py'
Oct 02 18:52:31 compute-0 sudo[45517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:31 compute-0 python3.9[45519]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:52:31 compute-0 sudo[45517]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:32 compute-0 sudo[45640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrnlczofuxealgezolnlzruuhrwlfgtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431151.242694-263-131197987031310/AnsiballZ_copy.py'
Oct 02 18:52:32 compute-0 sudo[45640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:32 compute-0 python3.9[45642]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759431151.242694-263-131197987031310/.source.json _original_basename=.cn3vcd_i follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:52:32 compute-0 sudo[45640]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:33 compute-0 sudo[45792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phygijqstnpwbtcnjgxojsfizdjyocaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431152.7323518-281-146095118300670/AnsiballZ_podman_image.py'
Oct 02 18:52:33 compute-0 sudo[45792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:33 compute-0 python3.9[45794]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3593398349-lower\x2dmapped.mount: Deactivated successfully.
Oct 02 18:52:40 compute-0 podman[45806]: 2025-10-02 18:52:40.097823946 +0000 UTC m=+6.519741847 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 18:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:40 compute-0 sudo[45792]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:40 compute-0 sudo[46102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgplihhzwckvtfbsgimdtjifeohexfeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431160.623092-290-127159654864620/AnsiballZ_podman_image.py'
Oct 02 18:52:40 compute-0 sudo[46102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:41 compute-0 python3.9[46104]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:43 compute-0 podman[46115]: 2025-10-02 18:52:43.556010637 +0000 UTC m=+2.271503552 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 18:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:43 compute-0 sudo[46102]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:44 compute-0 sudo[46367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cayxcqvjcvbjgqzatowgtkjnbwchhyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431164.1818867-301-147266274887326/AnsiballZ_podman_image.py'
Oct 02 18:52:44 compute-0 sudo[46367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:44 compute-0 python3.9[46369]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:52 compute-0 podman[46382]: 2025-10-02 18:52:52.654609279 +0000 UTC m=+7.843880350 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 18:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:52 compute-0 sudo[46367]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:53 compute-0 sudo[46655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abuulbnxrutktapjjwgvqymhvznzpzli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431173.2968414-311-135656454196478/AnsiballZ_podman_image.py'
Oct 02 18:52:53 compute-0 sudo[46655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:53 compute-0 python3.9[46657]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:52:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:55 compute-0 podman[46669]: 2025-10-02 18:52:55.116540265 +0000 UTC m=+1.288082918 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 18:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:52:55 compute-0 sudo[46655]: pam_unix(sudo:session): session closed for user root
Oct 02 18:52:55 compute-0 sudo[46902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajwnsbmmgfmeqzqlgacxethjapeobyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431175.6106725-320-259502771151948/AnsiballZ_podman_image.py'
Oct 02 18:52:55 compute-0 sudo[46902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:52:56 compute-0 python3.9[46904]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:05 compute-0 podman[46916]: 2025-10-02 18:53:05.464314811 +0000 UTC m=+9.222173065 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 18:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:05 compute-0 sudo[46902]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:06 compute-0 sudo[47170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrbxhmqgytbwqjlghustcfltokczrpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431186.05754-331-84462830106184/AnsiballZ_podman_image.py'
Oct 02 18:53:06 compute-0 sudo[47170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:06 compute-0 python3.9[47172]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:22 compute-0 podman[47184]: 2025-10-02 18:53:22.02681981 +0000 UTC m=+15.386245076 image pull af55c482fa6ac3c7068a40d60290d5ada8b2ec948be38389742c3fe61801742f quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Oct 02 18:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:22 compute-0 sudo[47170]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:22 compute-0 sudo[47498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqirqzlitecfrduepmbbctyjodzrhivu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431202.3906622-331-146969881496641/AnsiballZ_podman_image.py'
Oct 02 18:53:22 compute-0 sudo[47498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:22 compute-0 python3.9[47500]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:53:24 compute-0 podman[47512]: 2025-10-02 18:53:24.632927917 +0000 UTC m=+1.630967662 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct 02 18:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:24 compute-0 sudo[47498]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:25 compute-0 sudo[47790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arthyzhfjukxopwaqxfgvfuzdveqnndf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431205.100676-347-231677382940109/AnsiballZ_podman_image.py'
Oct 02 18:53:25 compute-0 sudo[47790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:25 compute-0 python3.9[47792]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:28 compute-0 podman[47804]: 2025-10-02 18:53:28.625424024 +0000 UTC m=+2.945888669 image pull 4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct 02 18:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:28 compute-0 sudo[47790]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:29 compute-0 sudo[48059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdtttkwaltewffmdedgzmaygmkuqsphl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431209.068693-347-168443042980548/AnsiballZ_podman_image.py'
Oct 02 18:53:29 compute-0 sudo[48059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:29 compute-0 python3.9[48061]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 02 18:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:36 compute-0 podman[48072]: 2025-10-02 18:53:36.644164826 +0000 UTC m=+6.841811881 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct 02 18:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:53:36 compute-0 sudo[48059]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:37 compute-0 sshd-session[41109]: Connection closed by 192.168.122.30 port 49324
Oct 02 18:53:37 compute-0 sshd-session[41106]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:53:37 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct 02 18:53:37 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 02 18:53:37 compute-0 systemd[1]: session-10.scope: Consumed 2min 29.517s CPU time.
Oct 02 18:53:37 compute-0 systemd-logind[798]: Removed session 10.
Oct 02 18:53:43 compute-0 sshd-session[48322]: Accepted publickey for zuul from 192.168.122.30 port 57958 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:53:43 compute-0 systemd-logind[798]: New session 11 of user zuul.
Oct 02 18:53:43 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 02 18:53:43 compute-0 sshd-session[48322]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:53:44 compute-0 python3.9[48475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:53:45 compute-0 sudo[48629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmbjjmymfvcpvmdlexqaepcevwfmdcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431224.9667249-36-112487886382624/AnsiballZ_getent.py'
Oct 02 18:53:45 compute-0 sudo[48629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:45 compute-0 python3.9[48631]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 18:53:45 compute-0 sudo[48629]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:46 compute-0 sudo[48782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eitjweytzllrhexyhfnzpvftrkxbfwoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431225.8395948-44-109328475068337/AnsiballZ_group.py'
Oct 02 18:53:46 compute-0 sudo[48782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:46 compute-0 python3.9[48784]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 18:53:46 compute-0 groupadd[48785]: group added to /etc/group: name=openvswitch, GID=42476
Oct 02 18:53:46 compute-0 groupadd[48785]: group added to /etc/gshadow: name=openvswitch
Oct 02 18:53:46 compute-0 groupadd[48785]: new group: name=openvswitch, GID=42476
Oct 02 18:53:46 compute-0 sudo[48782]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:47 compute-0 sudo[48940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxsnsisgjmakoghllinnatagrskttmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431226.75273-52-5154247702014/AnsiballZ_user.py'
Oct 02 18:53:47 compute-0 sudo[48940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:47 compute-0 python3.9[48942]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 18:53:47 compute-0 useradd[48944]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 18:53:47 compute-0 useradd[48944]: add 'openvswitch' to group 'hugetlbfs'
Oct 02 18:53:47 compute-0 useradd[48944]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 02 18:53:47 compute-0 sudo[48940]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:48 compute-0 sudo[49100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xesmymqsszarnoarkwcupfotvodyufci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431228.113179-62-251845304408092/AnsiballZ_setup.py'
Oct 02 18:53:48 compute-0 sudo[49100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:48 compute-0 python3.9[49102]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:53:48 compute-0 sudo[49100]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:49 compute-0 sudo[49184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-algiznztnjyhdjwzdtvmrtyifqywrfzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431228.113179-62-251845304408092/AnsiballZ_dnf.py'
Oct 02 18:53:49 compute-0 sudo[49184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:49 compute-0 python3.9[49186]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:53:51 compute-0 sudo[49184]: pam_unix(sudo:session): session closed for user root
Oct 02 18:53:51 compute-0 sudo[49345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwiqxorhptpgimrnzysybblwiaxmjox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431231.3402386-76-56699782518843/AnsiballZ_dnf.py'
Oct 02 18:53:51 compute-0 sudo[49345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:53:51 compute-0 python3.9[49347]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:54:04 compute-0 kernel: SELinux:  Converting 2725 SID table entries...
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:54:04 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:54:04 compute-0 groupadd[49370]: group added to /etc/group: name=unbound, GID=993
Oct 02 18:54:04 compute-0 groupadd[49370]: group added to /etc/gshadow: name=unbound
Oct 02 18:54:04 compute-0 groupadd[49370]: new group: name=unbound, GID=993
Oct 02 18:54:04 compute-0 useradd[49377]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 02 18:54:04 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 02 18:54:04 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 02 18:54:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:54:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:54:05 compute-0 systemd[1]: Reloading.
Oct 02 18:54:06 compute-0 systemd-rc-local-generator[49869]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:54:06 compute-0 systemd-sysv-generator[49875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:54:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:54:06 compute-0 sudo[49345]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:54:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:54:06 compute-0 systemd[1]: run-r5d621fc145ce4923a78e45d940ed5fe1.service: Deactivated successfully.
Oct 02 18:54:07 compute-0 sudo[50446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwswcktftamjpmpqwpqcskcudltpryfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431246.89312-84-192031722215883/AnsiballZ_systemd.py'
Oct 02 18:54:07 compute-0 sudo[50446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:07 compute-0 python3.9[50448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:54:08 compute-0 systemd[1]: Reloading.
Oct 02 18:54:08 compute-0 systemd-sysv-generator[50483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:54:08 compute-0 systemd-rc-local-generator[50479]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:54:08 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 02 18:54:08 compute-0 chown[50490]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 02 18:54:08 compute-0 ovs-ctl[50495]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 02 18:54:08 compute-0 ovs-ctl[50495]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 02 18:54:08 compute-0 ovs-ctl[50495]: Starting ovsdb-server [  OK  ]
Oct 02 18:54:08 compute-0 ovs-vsctl[50544]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 02 18:54:08 compute-0 ovs-vsctl[50564]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bbab9e90-4b9d-4a75-81b6-ad2c1de412c6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 02 18:54:08 compute-0 ovs-ctl[50495]: Configuring Open vSwitch system IDs [  OK  ]
Oct 02 18:54:08 compute-0 ovs-ctl[50495]: Enabling remote OVSDB managers [  OK  ]
Oct 02 18:54:08 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 02 18:54:08 compute-0 ovs-vsctl[50569]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 02 18:54:08 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 02 18:54:08 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 02 18:54:08 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 02 18:54:08 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 02 18:54:08 compute-0 ovs-ctl[50614]: Inserting openvswitch module [  OK  ]
Oct 02 18:54:09 compute-0 ovs-ctl[50583]: Starting ovs-vswitchd [  OK  ]
Oct 02 18:54:09 compute-0 ovs-vsctl[50635]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 02 18:54:09 compute-0 ovs-ctl[50583]: Enabling remote OVSDB managers [  OK  ]
Oct 02 18:54:09 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 02 18:54:09 compute-0 systemd[1]: Starting Open vSwitch...
Oct 02 18:54:09 compute-0 systemd[1]: Finished Open vSwitch.
Oct 02 18:54:09 compute-0 sudo[50446]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:10 compute-0 python3.9[50787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:54:10 compute-0 sudo[50937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfwiybydnvlawkflweqkuavhtuuzagzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431250.2516656-102-276104739355267/AnsiballZ_sefcontext.py'
Oct 02 18:54:10 compute-0 sudo[50937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:11 compute-0 python3.9[50939]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 18:54:12 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 18:54:12 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 18:54:12 compute-0 sudo[50937]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:13 compute-0 python3.9[51094]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:54:13 compute-0 sudo[51250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsycpcupxmrflhtqetxcseymrfyawmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431253.5193129-120-40941677156944/AnsiballZ_dnf.py'
Oct 02 18:54:13 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 02 18:54:13 compute-0 sudo[51250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:14 compute-0 python3.9[51252]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:54:15 compute-0 sudo[51250]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:15 compute-0 sudo[51403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mitcukvxivofyiotagjfkofgmejnxfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431255.4877896-128-132367204551076/AnsiballZ_command.py'
Oct 02 18:54:15 compute-0 sudo[51403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:16 compute-0 python3.9[51405]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:54:16 compute-0 sudo[51403]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:17 compute-0 sudo[51690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgyzhdnqgoyrchltgkhfgmjxfwuiofbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431257.137887-136-127386521287354/AnsiballZ_file.py'
Oct 02 18:54:17 compute-0 sudo[51690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:17 compute-0 python3.9[51692]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 18:54:17 compute-0 sudo[51690]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:18 compute-0 python3.9[51842]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:54:19 compute-0 sudo[51994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybyessmckbqkfmabqkfnjnfzwgeyoyxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431258.9397166-152-238079034597470/AnsiballZ_dnf.py'
Oct 02 18:54:19 compute-0 sudo[51994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:19 compute-0 python3.9[51996]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:54:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:54:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:54:21 compute-0 systemd[1]: Reloading.
Oct 02 18:54:21 compute-0 systemd-sysv-generator[52040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:54:21 compute-0 systemd-rc-local-generator[52033]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:54:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:54:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:54:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:54:21 compute-0 systemd[1]: run-rfa1a15573693491abc1f2d6c5958acc7.service: Deactivated successfully.
Oct 02 18:54:21 compute-0 sudo[51994]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:22 compute-0 sudo[52311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yikteyitoymtyezgmwmpxpakfcxsfnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431262.155745-160-215451836933914/AnsiballZ_systemd.py'
Oct 02 18:54:22 compute-0 sudo[52311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:22 compute-0 python3.9[52313]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:54:22 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 18:54:22 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 02 18:54:22 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 02 18:54:22 compute-0 systemd[1]: Stopping Network Manager...
Oct 02 18:54:22 compute-0 NetworkManager[3949]: <info>  [1759431262.7779] caught SIGTERM, shutting down normally.
Oct 02 18:54:22 compute-0 NetworkManager[3949]: <info>  [1759431262.7798] dhcp4 (eth0): canceled DHCP transaction
Oct 02 18:54:22 compute-0 NetworkManager[3949]: <info>  [1759431262.7798] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:54:22 compute-0 NetworkManager[3949]: <info>  [1759431262.7798] dhcp4 (eth0): state changed no lease
Oct 02 18:54:22 compute-0 NetworkManager[3949]: <info>  [1759431262.7801] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:54:22 compute-0 NetworkManager[3949]: <info>  [1759431262.7888] exiting (success)
Oct 02 18:54:22 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:54:22 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:54:22 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 18:54:22 compute-0 systemd[1]: Stopped Network Manager.
Oct 02 18:54:22 compute-0 systemd[1]: NetworkManager.service: Consumed 15.489s CPU time, 4.1M memory peak, read 0B from disk, written 19.5K to disk.
Oct 02 18:54:22 compute-0 systemd[1]: Starting Network Manager...
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.8631] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:cafe0c2a-2d4b-4517-8a8b-b22a7ae0a086)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.8634] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.8685] manager[0x55c6f13f1090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 18:54:22 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 18:54:22 compute-0 systemd[1]: Started Hostname Service.
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9866] hostname: hostname: using hostnamed
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9866] hostname: static hostname changed from (none) to "compute-0"
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9871] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9876] manager[0x55c6f13f1090]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9877] manager[0x55c6f13f1090]: rfkill: WWAN hardware radio set enabled
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9898] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9906] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9907] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9907] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9908] manager: Networking is enabled by state file
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9910] settings: Loaded settings plugin: keyfile (internal)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9914] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9937] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9948] dhcp: init: Using DHCP client 'internal'
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9951] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9956] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9962] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9968] device (lo): Activation: starting connection 'lo' (aeebfd8a-b15e-4738-a7c9-24998c83f095)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9974] device (eth0): carrier: link connected
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9978] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9983] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9984] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9991] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:54:22 compute-0 NetworkManager[52324]: <info>  [1759431262.9997] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0001] device (eth1): carrier: link connected
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0005] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0012] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ddf31ed9-79b0-5b7b-a7a3-0e250874a52d) (indicated)
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0013] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0018] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0025] device (eth1): Activation: starting connection 'ci-private-network' (ddf31ed9-79b0-5b7b-a7a3-0e250874a52d)
Oct 02 18:54:23 compute-0 systemd[1]: Started Network Manager.
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0031] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0042] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0045] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0046] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0048] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0051] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0053] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0056] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0058] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0066] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0068] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0088] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0120] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0134] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0138] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0147] device (lo): Activation: successful, device activated.
Oct 02 18:54:23 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0158] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0173] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0250] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0255] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0256] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0258] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0260] device (eth1): Activation: successful, device activated.
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0289] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0291] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0294] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0297] device (eth0): Activation: successful, device activated.
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0302] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 18:54:23 compute-0 sudo[52311]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:23 compute-0 NetworkManager[52324]: <info>  [1759431263.0343] manager: startup complete
Oct 02 18:54:23 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 02 18:54:23 compute-0 sudo[52537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfzwesyaxqlujhqivsbssgbtvwrwrlzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431263.2187462-168-205775456016102/AnsiballZ_dnf.py'
Oct 02 18:54:23 compute-0 sudo[52537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:23 compute-0 python3.9[52539]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:54:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 18:54:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 18:54:28 compute-0 systemd[1]: Reloading.
Oct 02 18:54:28 compute-0 systemd-sysv-generator[52597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:54:28 compute-0 systemd-rc-local-generator[52592]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:54:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 18:54:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 18:54:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 18:54:29 compute-0 systemd[1]: run-rba2b78ce0c7e41a4a2e7c3353c5ad6d2.service: Deactivated successfully.
Oct 02 18:54:29 compute-0 sudo[52537]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:30 compute-0 sudo[52999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogngurfvufolfryvycsezcyvlnivapmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431269.7938952-180-87193480364784/AnsiballZ_stat.py'
Oct 02 18:54:30 compute-0 sudo[52999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:30 compute-0 python3.9[53001]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:54:30 compute-0 sudo[52999]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:31 compute-0 sudo[53151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaypdlbxderpvfogyavkchdgoztzkbkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431270.622105-189-11117248803598/AnsiballZ_ini_file.py'
Oct 02 18:54:31 compute-0 sudo[53151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:31 compute-0 python3.9[53153]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:31 compute-0 sudo[53151]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:31 compute-0 sudo[53305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjjkixwyjnyngfcbfmjjyitzckrrwzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431271.5498276-199-39768759804869/AnsiballZ_ini_file.py'
Oct 02 18:54:31 compute-0 sudo[53305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:32 compute-0 python3.9[53307]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:32 compute-0 sudo[53305]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:32 compute-0 sudo[53457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbwofntywgsnprekpoilgogqzjrtgtlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431272.2076924-199-162852163539181/AnsiballZ_ini_file.py'
Oct 02 18:54:32 compute-0 sudo[53457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:32 compute-0 python3.9[53459]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:32 compute-0 sudo[53457]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:54:33 compute-0 sudo[53609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uomqnmfpefsihopaebjaxbauiktppudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431273.026152-214-39071721265032/AnsiballZ_ini_file.py'
Oct 02 18:54:33 compute-0 sudo[53609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:33 compute-0 python3.9[53611]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:33 compute-0 sudo[53609]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:34 compute-0 sudo[53761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbeattrliavhcvvockyboxaqrzdyyvhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431273.8713615-214-212313375011799/AnsiballZ_ini_file.py'
Oct 02 18:54:34 compute-0 sudo[53761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:34 compute-0 python3.9[53763]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:34 compute-0 sudo[53761]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:35 compute-0 sudo[53913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evuyqudsuzusnfnwtcehzqzuzmkkamso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431274.664742-229-257230664249722/AnsiballZ_stat.py'
Oct 02 18:54:35 compute-0 sudo[53913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:35 compute-0 python3.9[53915]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:35 compute-0 sudo[53913]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:35 compute-0 sudo[54036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krcxorkiucmouqgleianhhyemcgvvclu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431274.664742-229-257230664249722/AnsiballZ_copy.py'
Oct 02 18:54:35 compute-0 sudo[54036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:36 compute-0 python3.9[54038]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431274.664742-229-257230664249722/.source _original_basename=.6mumgn6m follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:36 compute-0 sudo[54036]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:36 compute-0 sudo[54188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxoygbbrmjvhxdxwrtopkkqycnbhkkce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431276.30655-244-233184907686976/AnsiballZ_file.py'
Oct 02 18:54:36 compute-0 sudo[54188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:36 compute-0 python3.9[54190]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:36 compute-0 sudo[54188]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:37 compute-0 sudo[54340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwevdvgmzhhbrcubdjgqixrsbvehbvtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431277.060892-252-272862326109353/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 02 18:54:37 compute-0 sudo[54340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:37 compute-0 python3.9[54342]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 02 18:54:37 compute-0 sudo[54340]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:38 compute-0 sudo[54492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrnunlmwhmfbjpzqvhzzjxijyxbruemx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431278.0205479-261-29261066882959/AnsiballZ_file.py'
Oct 02 18:54:38 compute-0 sudo[54492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:38 compute-0 python3.9[54494]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:38 compute-0 sudo[54492]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:39 compute-0 sudo[54644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgomncqihatcnclekkjxbwpowcqpkrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431278.8792088-271-17528263911243/AnsiballZ_stat.py'
Oct 02 18:54:39 compute-0 sudo[54644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:39 compute-0 sudo[54644]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:39 compute-0 sudo[54767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfznyeovjvfwbsmwbvjrmchowxfffvvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431278.8792088-271-17528263911243/AnsiballZ_copy.py'
Oct 02 18:54:39 compute-0 sudo[54767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:40 compute-0 sudo[54767]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:40 compute-0 sudo[54919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawuskhghxebxicdkupogvukdhyvlhlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431280.1913133-286-232001224440724/AnsiballZ_slurp.py'
Oct 02 18:54:40 compute-0 sudo[54919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:40 compute-0 python3.9[54921]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 02 18:54:40 compute-0 sudo[54919]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:42 compute-0 sudo[55094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cczviegabjnjwjbdcezdmgxyvpfrhlvb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431281.1782296-295-254638714100971/async_wrapper.py j284436177951 300 /home/zuul/.ansible/tmp/ansible-tmp-1759431281.1782296-295-254638714100971/AnsiballZ_edpm_os_net_config.py _'
Oct 02 18:54:42 compute-0 sudo[55094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:42 compute-0 ansible-async_wrapper.py[55096]: Invoked with j284436177951 300 /home/zuul/.ansible/tmp/ansible-tmp-1759431281.1782296-295-254638714100971/AnsiballZ_edpm_os_net_config.py _
Oct 02 18:54:42 compute-0 ansible-async_wrapper.py[55099]: Starting module and watcher
Oct 02 18:54:42 compute-0 ansible-async_wrapper.py[55099]: Start watching 55100 (300)
Oct 02 18:54:42 compute-0 ansible-async_wrapper.py[55100]: Start module (55100)
Oct 02 18:54:42 compute-0 ansible-async_wrapper.py[55096]: Return async_wrapper task started.
Oct 02 18:54:42 compute-0 sudo[55094]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:42 compute-0 python3.9[55101]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 02 18:54:43 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 02 18:54:43 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 02 18:54:43 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 02 18:54:43 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 02 18:54:43 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1470] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1483] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1944] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1945] audit: op="connection-add" uuid="42777a6f-f435-4b84-bae5-7c0849651e96" name="br-ex-br" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1956] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1957] audit: op="connection-add" uuid="b70675c7-a67b-40fc-ac3f-ea835d19cd49" name="br-ex-port" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1966] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1967] audit: op="connection-add" uuid="e950fa76-9864-4804-a03c-24ffcb43fcf3" name="eth1-port" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1976] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1977] audit: op="connection-add" uuid="77291f41-2df9-4dbd-b2c5-5900ef8b4440" name="vlan20-port" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1986] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1987] audit: op="connection-add" uuid="25ae5e1a-68ff-45ef-9a03-c5830f6a3dc6" name="vlan21-port" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1996] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.1997] audit: op="connection-add" uuid="a7c221ad-edf5-465f-98fd-681ba1d9fc54" name="vlan22-port" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2012] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2025] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2026] audit: op="connection-add" uuid="124247dd-3060-4313-a836-dfa6f2e3d382" name="br-ex-if" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2078] audit: op="connection-update" uuid="ddf31ed9-79b0-5b7b-a7a3-0e250874a52d" name="ci-private-network" args="connection.slave-type,connection.master,connection.port-type,connection.controller,connection.timestamp,ovs-interface.type,ovs-external-ids.data,ipv4.routes,ipv4.routing-rules,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.addresses,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.addresses" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2093] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2094] audit: op="connection-add" uuid="c03cf9bb-0594-4302-9c1b-69e75e5354c3" name="vlan20-if" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2107] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2108] audit: op="connection-add" uuid="d4f506db-3a21-415b-83dc-da2fabfa1d24" name="vlan21-if" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2120] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2121] audit: op="connection-add" uuid="49c45ed1-2c3e-4994-808b-689ac9aefb18" name="vlan22-if" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2130] audit: op="connection-delete" uuid="d4b15060-b769-328b-b6bc-44454af900b8" name="Wired connection 1" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2138] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2146] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2149] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (42777a6f-f435-4b84-bae5-7c0849651e96)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2149] audit: op="connection-activate" uuid="42777a6f-f435-4b84-bae5-7c0849651e96" name="br-ex-br" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2151] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2157] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2160] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (b70675c7-a67b-40fc-ac3f-ea835d19cd49)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2161] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2166] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2169] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e950fa76-9864-4804-a03c-24ffcb43fcf3)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2170] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2176] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2178] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (77291f41-2df9-4dbd-b2c5-5900ef8b4440)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2180] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2185] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2188] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (25ae5e1a-68ff-45ef-9a03-c5830f6a3dc6)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2189] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2194] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2197] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a7c221ad-edf5-465f-98fd-681ba1d9fc54)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2197] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2199] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2201] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2205] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2208] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2211] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (124247dd-3060-4313-a836-dfa6f2e3d382)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2212] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2214] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2215] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2216] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2217] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2224] device (eth1): disconnecting for new activation request.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2224] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2227] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2228] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2229] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2231] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2234] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2236] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c03cf9bb-0594-4302-9c1b-69e75e5354c3)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2236] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2238] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2239] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2240] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2242] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2244] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2249] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d4f506db-3a21-415b-83dc-da2fabfa1d24)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2249] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2251] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2252] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2253] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2254] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2257] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2260] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (49c45ed1-2c3e-4994-808b-689ac9aefb18)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2260] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2262] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2263] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2264] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2265] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2275] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2276] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2278] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2279] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2284] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2287] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2289] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2291] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2292] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2296] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2298] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2300] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2302] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2305] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2307] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2310] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2312] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2315] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2319] dhcp4 (eth0): canceled DHCP transaction
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2319] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2319] dhcp4 (eth0): state changed no lease
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2320] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2337] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2339] audit: op="device-reapply" interface="eth1" ifindex=3 pid=55102 uid=0 result="fail" reason="Device is not activated"
Oct 02 18:54:44 compute-0 systemd-udevd[55106]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:54:44 compute-0 kernel: Timeout policy base is empty
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2371] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2380] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2389] device (eth1): disconnecting for new activation request.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2392] audit: op="connection-activate" uuid="ddf31ed9-79b0-5b7b-a7a3-0e250874a52d" name="ci-private-network" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2394] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2400] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Oct 02 18:54:44 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2453] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=55102 uid=0 result="success"
Oct 02 18:54:44 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2598] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 02 18:54:44 compute-0 kernel: br-ex: entered promiscuous mode
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2788] device (eth1): Activation: starting connection 'ci-private-network' (ddf31ed9-79b0-5b7b-a7a3-0e250874a52d)
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2799] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2801] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 kernel: vlan22: entered promiscuous mode
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2816] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2816] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2817] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2818] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2819] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2820] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2822] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2827] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2834] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2837] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2840] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2843] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2845] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2847] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2850] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2853] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2856] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2859] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2861] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2864] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2867] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 02 18:54:44 compute-0 kernel: vlan20: entered promiscuous mode
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2873] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 systemd-udevd[55108]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2890] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2902] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2903] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2908] device (eth1): Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2913] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2914] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2919] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2934] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:54:44 compute-0 kernel: vlan21: entered promiscuous mode
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.2956] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3005] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3005] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3008] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3017] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3034] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3046] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3061] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3070] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3072] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3078] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3087] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3088] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 18:54:44 compute-0 NetworkManager[52324]: <info>  [1759431284.3095] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 18:54:45 compute-0 NetworkManager[52324]: <info>  [1759431285.4335] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=55102 uid=0 result="success"
Oct 02 18:54:45 compute-0 NetworkManager[52324]: <info>  [1759431285.6352] checkpoint[0x55c6f13c6950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 02 18:54:45 compute-0 NetworkManager[52324]: <info>  [1759431285.6354] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.0924] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.0942] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 sudo[55434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdgctkztufodypkrqotigxobiavncgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431285.3958714-295-217068185333647/AnsiballZ_async_status.py'
Oct 02 18:54:46 compute-0 sudo[55434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:46 compute-0 python3.9[55436]: ansible-ansible.legacy.async_status Invoked with jid=j284436177951.55096 mode=status _async_dir=/root/.ansible_async
Oct 02 18:54:46 compute-0 sudo[55434]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.3624] audit: op="networking-control" arg="global-dns-configuration" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.3765] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.3810] audit: op="networking-control" arg="global-dns-configuration" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.3832] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.5002] checkpoint[0x55c6f13c6a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 02 18:54:46 compute-0 NetworkManager[52324]: <info>  [1759431286.5005] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=55102 uid=0 result="success"
Oct 02 18:54:46 compute-0 ansible-async_wrapper.py[55100]: Module complete (55100)
Oct 02 18:54:47 compute-0 ansible-async_wrapper.py[55099]: Done in kid B.
Oct 02 18:54:49 compute-0 sudo[55539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdtqlmvvshhpovitsuourhsztinqwmwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431285.3958714-295-217068185333647/AnsiballZ_async_status.py'
Oct 02 18:54:49 compute-0 sudo[55539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:49 compute-0 python3.9[55541]: ansible-ansible.legacy.async_status Invoked with jid=j284436177951.55096 mode=status _async_dir=/root/.ansible_async
Oct 02 18:54:49 compute-0 sudo[55539]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:50 compute-0 sudo[55638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddrkqogrunfhwcfvtabpgrwytvltgelq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431285.3958714-295-217068185333647/AnsiballZ_async_status.py'
Oct 02 18:54:50 compute-0 sudo[55638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:50 compute-0 python3.9[55640]: ansible-ansible.legacy.async_status Invoked with jid=j284436177951.55096 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 18:54:50 compute-0 sudo[55638]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:51 compute-0 sudo[55790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoperduznngzghbtnexbmllzxtptgvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431290.6736536-322-20412504491887/AnsiballZ_stat.py'
Oct 02 18:54:51 compute-0 sudo[55790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:51 compute-0 python3.9[55792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:51 compute-0 sudo[55790]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:51 compute-0 sudo[55913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijgwcrkbwhbhatuwssxagbiwyfeitpbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431290.6736536-322-20412504491887/AnsiballZ_copy.py'
Oct 02 18:54:51 compute-0 sudo[55913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:51 compute-0 python3.9[55915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431290.6736536-322-20412504491887/.source.returncode _original_basename=.pgel_hdb follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:51 compute-0 sudo[55913]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:52 compute-0 sudo[56066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjscftjibwlnivqwlrmflmxabfqeafs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431292.5158339-338-103272670182937/AnsiballZ_stat.py'
Oct 02 18:54:52 compute-0 sudo[56066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 18:54:53 compute-0 python3.9[56068]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:54:53 compute-0 sudo[56066]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:53 compute-0 sudo[56191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggbztbpzfcvsrtzwwomgnfomkvtnvhai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431292.5158339-338-103272670182937/AnsiballZ_copy.py'
Oct 02 18:54:53 compute-0 sudo[56191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:53 compute-0 python3.9[56193]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431292.5158339-338-103272670182937/.source.cfg _original_basename=.3d8d4ubx follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:54:53 compute-0 sudo[56191]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:54 compute-0 sudo[56343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkmqwuawmhofpoidzbseymvonhxvfkau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431293.9224982-353-114992213035463/AnsiballZ_systemd.py'
Oct 02 18:54:54 compute-0 sudo[56343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:54:54 compute-0 python3.9[56345]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:54:54 compute-0 systemd[1]: Reloading Network Manager...
Oct 02 18:54:54 compute-0 NetworkManager[52324]: <info>  [1759431294.7010] audit: op="reload" arg="0" pid=56349 uid=0 result="success"
Oct 02 18:54:54 compute-0 NetworkManager[52324]: <info>  [1759431294.7016] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 02 18:54:54 compute-0 systemd[1]: Reloaded Network Manager.
Oct 02 18:54:54 compute-0 sudo[56343]: pam_unix(sudo:session): session closed for user root
Oct 02 18:54:55 compute-0 sshd-session[48325]: Connection closed by 192.168.122.30 port 57958
Oct 02 18:54:55 compute-0 sshd-session[48322]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:54:55 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 02 18:54:55 compute-0 systemd[1]: session-11.scope: Consumed 51.060s CPU time.
Oct 02 18:54:55 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct 02 18:54:55 compute-0 systemd-logind[798]: Removed session 11.
Oct 02 18:55:04 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 18:55:05 compute-0 sshd-session[56383]: Accepted publickey for zuul from 192.168.122.30 port 41984 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:55:05 compute-0 systemd-logind[798]: New session 12 of user zuul.
Oct 02 18:55:05 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 02 18:55:05 compute-0 sshd-session[56383]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:55:06 compute-0 python3.9[56537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:55:07 compute-0 python3.9[56691]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:55:08 compute-0 python3.9[56880]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:09 compute-0 sshd-session[56386]: Connection closed by 192.168.122.30 port 41984
Oct 02 18:55:09 compute-0 sshd-session[56383]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:55:09 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 02 18:55:09 compute-0 systemd[1]: session-12.scope: Consumed 2.597s CPU time.
Oct 02 18:55:09 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct 02 18:55:09 compute-0 systemd-logind[798]: Removed session 12.
Oct 02 18:55:15 compute-0 sshd-session[56908]: Accepted publickey for zuul from 192.168.122.30 port 38940 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:55:15 compute-0 systemd-logind[798]: New session 13 of user zuul.
Oct 02 18:55:15 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 02 18:55:15 compute-0 sshd-session[56908]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:55:16 compute-0 python3.9[57062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:55:17 compute-0 python3.9[57216]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:55:17 compute-0 sudo[57370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlmrlhdohbgeynbgfubuqzjfwkgletdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431317.6117811-40-187271474975315/AnsiballZ_setup.py'
Oct 02 18:55:17 compute-0 sudo[57370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:18 compute-0 python3.9[57372]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:55:18 compute-0 sudo[57370]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:18 compute-0 sudo[57454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtqwymqyiyuulogjswhjyixlxjlcbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431317.6117811-40-187271474975315/AnsiballZ_dnf.py'
Oct 02 18:55:18 compute-0 sudo[57454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:19 compute-0 python3.9[57456]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:55:20 compute-0 sudo[57454]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:20 compute-0 sudo[57607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjevswlnufcgcrmdsxpbzwyhppfrrbmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431320.448017-52-157656313429553/AnsiballZ_setup.py'
Oct 02 18:55:20 compute-0 sudo[57607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:21 compute-0 python3.9[57609]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:55:21 compute-0 sudo[57607]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:22 compute-0 sudo[57798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxhnalcyblaagokwxcgqdvrzdlabkect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431321.684555-63-15591270723439/AnsiballZ_file.py'
Oct 02 18:55:22 compute-0 sudo[57798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:22 compute-0 python3.9[57800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:22 compute-0 sudo[57798]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:23 compute-0 sudo[57950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiilcmdmwampyrykdjofsacaaworyjyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431322.5896733-71-135721073081278/AnsiballZ_command.py'
Oct 02 18:55:23 compute-0 sudo[57950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:23 compute-0 python3.9[57952]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 18:55:23 compute-0 sudo[57950]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:24 compute-0 sudo[58112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwiyjumpyddvmbahhecnnyjvczwaezfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431323.5284426-79-279501845806472/AnsiballZ_stat.py'
Oct 02 18:55:24 compute-0 sudo[58112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:24 compute-0 python3.9[58114]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:24 compute-0 sudo[58112]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:24 compute-0 sudo[58190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbvdyxxfjvqkjeyozjtbqnckllftxtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431323.5284426-79-279501845806472/AnsiballZ_file.py'
Oct 02 18:55:24 compute-0 sudo[58190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:24 compute-0 python3.9[58192]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:24 compute-0 sudo[58190]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:25 compute-0 sudo[58342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmyemrmkxnnoxmzglsvyslsznpyaqwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431325.0075746-91-177865890418936/AnsiballZ_stat.py'
Oct 02 18:55:25 compute-0 sudo[58342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:25 compute-0 python3.9[58344]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:25 compute-0 sudo[58342]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:25 compute-0 sudo[58420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhwjzsodxctvdljqvkowbsrdunxvsiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431325.0075746-91-177865890418936/AnsiballZ_file.py'
Oct 02 18:55:25 compute-0 sudo[58420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:26 compute-0 python3.9[58422]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:26 compute-0 sudo[58420]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:26 compute-0 sudo[58572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzfwbqtplctishopdofaligemqnyvfmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431326.3618686-104-12620460807712/AnsiballZ_ini_file.py'
Oct 02 18:55:26 compute-0 sudo[58572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:27 compute-0 python3.9[58574]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:27 compute-0 sudo[58572]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:27 compute-0 sudo[58724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumuwhkclfzssmoeqldflxwnuuakeqhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431327.3013594-104-72603082936709/AnsiballZ_ini_file.py'
Oct 02 18:55:27 compute-0 sudo[58724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:27 compute-0 python3.9[58726]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:27 compute-0 sudo[58724]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:28 compute-0 sudo[58876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejzwjlhenvfgbmgyklpuxiglpdknscpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431328.0306268-104-106968050620832/AnsiballZ_ini_file.py'
Oct 02 18:55:28 compute-0 sudo[58876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:28 compute-0 python3.9[58878]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:28 compute-0 sudo[58876]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:29 compute-0 sudo[59028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcgsfjqkuzyyyiacouxjesksqxrntyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431328.7197373-104-118610108269872/AnsiballZ_ini_file.py'
Oct 02 18:55:29 compute-0 sudo[59028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:29 compute-0 python3.9[59030]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:55:29 compute-0 sudo[59028]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:29 compute-0 sudo[59180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnivoxdqtoepordjnuyvgotcyhpspajt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431329.526959-135-64243439791444/AnsiballZ_dnf.py'
Oct 02 18:55:29 compute-0 sudo[59180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:30 compute-0 python3.9[59182]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:55:31 compute-0 sudo[59180]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:32 compute-0 sudo[59333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcoiwwdtzeaifyksvluxhsiovwabljlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431331.749932-146-128253027089231/AnsiballZ_setup.py'
Oct 02 18:55:32 compute-0 sudo[59333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:32 compute-0 python3.9[59335]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:55:32 compute-0 sudo[59333]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:32 compute-0 sudo[59487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtlrfaswnxecwrbkyislpbbgpzjcipzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431332.5607955-154-204547609628820/AnsiballZ_stat.py'
Oct 02 18:55:32 compute-0 sudo[59487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:33 compute-0 python3.9[59489]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:55:33 compute-0 sudo[59487]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:33 compute-0 sudo[59639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxdcsroqfjeefqqblrzkvywyumvpqujj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431333.3192978-163-214292817795018/AnsiballZ_stat.py'
Oct 02 18:55:33 compute-0 sudo[59639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:33 compute-0 python3.9[59641]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:55:33 compute-0 sudo[59639]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:34 compute-0 sudo[59791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inhigsimyuzrwvvyjqztebfyqvkmsmwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431334.078842-173-165019284451042/AnsiballZ_service_facts.py'
Oct 02 18:55:34 compute-0 sudo[59791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:34 compute-0 python3.9[59793]: ansible-service_facts Invoked
Oct 02 18:55:34 compute-0 network[59810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:55:34 compute-0 network[59811]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:55:34 compute-0 network[59812]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 18:55:37 compute-0 sudo[59791]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:38 compute-0 sudo[60097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdsclerhdhjiutyckgkzgngioypvzklp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759431338.3540795-186-182415017459756/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759431338.3540795-186-182415017459756/args'
Oct 02 18:55:38 compute-0 sudo[60097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:38 compute-0 sudo[60097]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:39 compute-0 sudo[60264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaepybzploxgikgdatgciymxmwxdjldr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431339.18197-197-271480306182177/AnsiballZ_dnf.py'
Oct 02 18:55:39 compute-0 sudo[60264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:39 compute-0 python3.9[60266]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:55:40 compute-0 sudo[60264]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:42 compute-0 sudo[60417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjrouvabhdaibwbzwrxmieqcmrcanwel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431341.2081013-210-174614933995985/AnsiballZ_package_facts.py'
Oct 02 18:55:42 compute-0 sudo[60417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:42 compute-0 python3.9[60419]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 18:55:42 compute-0 sudo[60417]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:43 compute-0 sudo[60569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzoxperabfrtizleiztubwnkxqkuyyaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431342.983488-220-160195226258609/AnsiballZ_stat.py'
Oct 02 18:55:43 compute-0 sudo[60569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:43 compute-0 python3.9[60571]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:43 compute-0 sudo[60569]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:44 compute-0 sudo[60694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amjcaxuxrquorfewrfmorvriqdqkamiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431342.983488-220-160195226258609/AnsiballZ_copy.py'
Oct 02 18:55:44 compute-0 sudo[60694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:44 compute-0 python3.9[60696]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431342.983488-220-160195226258609/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:44 compute-0 sudo[60694]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:45 compute-0 sudo[60848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irgbundpaajknowrqlkozsgswgrokner ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431344.6767926-235-109314633580097/AnsiballZ_stat.py'
Oct 02 18:55:45 compute-0 sudo[60848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:45 compute-0 python3.9[60850]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:55:45 compute-0 sudo[60848]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:45 compute-0 sudo[60973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kirztjvuyfhcgmkvpcuvncwvlfuarcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431344.6767926-235-109314633580097/AnsiballZ_copy.py'
Oct 02 18:55:45 compute-0 sudo[60973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:45 compute-0 python3.9[60975]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431344.6767926-235-109314633580097/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:45 compute-0 sudo[60973]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:46 compute-0 sudo[61127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micltirslrmvcfzzapxhkxnzufpdatik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431346.42722-256-21660341935936/AnsiballZ_lineinfile.py'
Oct 02 18:55:46 compute-0 sudo[61127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:47 compute-0 python3.9[61129]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:55:47 compute-0 sudo[61127]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:48 compute-0 sudo[61281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhgrorekrwrjxowtqszinnwhfznbxrks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431347.7818987-271-84730461836239/AnsiballZ_setup.py'
Oct 02 18:55:48 compute-0 sudo[61281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:48 compute-0 python3.9[61283]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:55:48 compute-0 sudo[61281]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:49 compute-0 sudo[61365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzjcercyiufnyklkrttbckycojboead ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431347.7818987-271-84730461836239/AnsiballZ_systemd.py'
Oct 02 18:55:49 compute-0 sudo[61365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:49 compute-0 python3.9[61367]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:55:49 compute-0 sudo[61365]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:50 compute-0 sudo[61519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxzstiatqcrsvpbdmnzbaqesrtmturll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431350.1716077-287-22137822789179/AnsiballZ_setup.py'
Oct 02 18:55:50 compute-0 sudo[61519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:50 compute-0 python3.9[61521]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:55:51 compute-0 sudo[61519]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:51 compute-0 sudo[61603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrubvewznodjkpcmuljraunpogqclqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431350.1716077-287-22137822789179/AnsiballZ_systemd.py'
Oct 02 18:55:51 compute-0 sudo[61603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:55:51 compute-0 python3.9[61605]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:55:51 compute-0 chronyd[810]: chronyd exiting
Oct 02 18:55:51 compute-0 systemd[1]: Stopping NTP client/server...
Oct 02 18:55:51 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 02 18:55:51 compute-0 systemd[1]: Stopped NTP client/server.
Oct 02 18:55:51 compute-0 systemd[1]: Starting NTP client/server...
Oct 02 18:55:51 compute-0 chronyd[61614]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 18:55:51 compute-0 chronyd[61614]: Frequency -26.495 +/- 0.106 ppm read from /var/lib/chrony/drift
Oct 02 18:55:51 compute-0 chronyd[61614]: Loaded seccomp filter (level 2)
Oct 02 18:55:51 compute-0 systemd[1]: Started NTP client/server.
Oct 02 18:55:51 compute-0 sudo[61603]: pam_unix(sudo:session): session closed for user root
Oct 02 18:55:52 compute-0 sshd-session[56911]: Connection closed by 192.168.122.30 port 38940
Oct 02 18:55:52 compute-0 sshd-session[56908]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:55:52 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct 02 18:55:52 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 02 18:55:52 compute-0 systemd[1]: session-13.scope: Consumed 26.407s CPU time.
Oct 02 18:55:52 compute-0 systemd-logind[798]: Removed session 13.
Oct 02 18:55:57 compute-0 sshd-session[61640]: Accepted publickey for zuul from 192.168.122.30 port 48950 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:55:57 compute-0 systemd-logind[798]: New session 14 of user zuul.
Oct 02 18:55:57 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 02 18:55:57 compute-0 sshd-session[61640]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:55:58 compute-0 python3.9[61793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:55:59 compute-0 sudo[61947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfqdtahgbmbomgymdjatpjhmotjpided ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431359.4012296-33-53570346757750/AnsiballZ_file.py'
Oct 02 18:55:59 compute-0 sudo[61947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:00 compute-0 python3.9[61949]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:00 compute-0 sudo[61947]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:00 compute-0 sudo[62122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-minewopqglisopgxdfkwyckjekcmfzfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431360.3023772-41-67735971721548/AnsiballZ_stat.py'
Oct 02 18:56:00 compute-0 sudo[62122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:01 compute-0 python3.9[62124]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:01 compute-0 sudo[62122]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:01 compute-0 sudo[62200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gblhbtkmpflhhgjfsjfvcsmqoniqvaal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431360.3023772-41-67735971721548/AnsiballZ_file.py'
Oct 02 18:56:01 compute-0 sudo[62200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:01 compute-0 python3.9[62202]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.jz75pqc_ recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:01 compute-0 sudo[62200]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:02 compute-0 sudo[62352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhztkgiqecgzypwpzbyqkckmibospjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431361.9178247-61-175338028795080/AnsiballZ_stat.py'
Oct 02 18:56:02 compute-0 sudo[62352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:02 compute-0 python3.9[62354]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:02 compute-0 sudo[62352]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:03 compute-0 sudo[62475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdtyjqctqtgmfatgquhjnpbrqbnxyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431361.9178247-61-175338028795080/AnsiballZ_copy.py'
Oct 02 18:56:03 compute-0 sudo[62475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:03 compute-0 python3.9[62477]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431361.9178247-61-175338028795080/.source _original_basename=.e1owarl8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:03 compute-0 sudo[62475]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:03 compute-0 sudo[62627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axnvicnzyyjxcyklobaddscqbfmllvvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431363.4589007-77-117297373141781/AnsiballZ_file.py'
Oct 02 18:56:03 compute-0 sudo[62627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:03 compute-0 python3.9[62629]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:56:04 compute-0 sudo[62627]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:04 compute-0 sudo[62779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgeuepudnyxffyrgikisixhitgmgjzda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431364.2013524-85-78199324816949/AnsiballZ_stat.py'
Oct 02 18:56:04 compute-0 sudo[62779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:04 compute-0 python3.9[62781]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:04 compute-0 sudo[62779]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:05 compute-0 sudo[62902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkmaqqdswjlvzbdifcdjceyrrztiyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431364.2013524-85-78199324816949/AnsiballZ_copy.py'
Oct 02 18:56:05 compute-0 sudo[62902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:05 compute-0 python3.9[62904]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431364.2013524-85-78199324816949/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:56:05 compute-0 sudo[62902]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:06 compute-0 sudo[63054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjvocoqyaavusaowiozyoidbmnkgmyhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431365.6840827-85-31375988350220/AnsiballZ_stat.py'
Oct 02 18:56:06 compute-0 sudo[63054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:06 compute-0 python3.9[63056]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:06 compute-0 sudo[63054]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:06 compute-0 sudo[63177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omaxogmxvjcjwjdqtbmfeaagenhlwuty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431365.6840827-85-31375988350220/AnsiballZ_copy.py'
Oct 02 18:56:06 compute-0 sudo[63177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:06 compute-0 python3.9[63179]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431365.6840827-85-31375988350220/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:56:06 compute-0 sudo[63177]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:07 compute-0 sudo[63329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqsbelocbabjvbmffqfaqxdoarcpcyhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431367.1188114-114-10815437605028/AnsiballZ_file.py'
Oct 02 18:56:07 compute-0 sudo[63329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:07 compute-0 python3.9[63331]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:07 compute-0 sudo[63329]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:08 compute-0 sudo[63481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lccqqpjcybhoqanlqmpazohbddclrruz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431367.8803732-122-223393146215332/AnsiballZ_stat.py'
Oct 02 18:56:08 compute-0 sudo[63481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:08 compute-0 python3.9[63483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:08 compute-0 sudo[63481]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:08 compute-0 sudo[63604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwhtgviqvdajlmtsnefdtglfcgdqwayd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431367.8803732-122-223393146215332/AnsiballZ_copy.py'
Oct 02 18:56:08 compute-0 sudo[63604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:09 compute-0 python3.9[63606]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431367.8803732-122-223393146215332/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:09 compute-0 sudo[63604]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:09 compute-0 sudo[63756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruseqbxztnvhglxigxrlvcprjwcuapqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431369.349683-137-30120315339028/AnsiballZ_stat.py'
Oct 02 18:56:09 compute-0 sudo[63756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:09 compute-0 python3.9[63758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:09 compute-0 sudo[63756]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:10 compute-0 sudo[63879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipltqdxgvfbmkgwhjvbpnmretamoldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431369.349683-137-30120315339028/AnsiballZ_copy.py'
Oct 02 18:56:10 compute-0 sudo[63879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:10 compute-0 python3.9[63881]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431369.349683-137-30120315339028/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:10 compute-0 sudo[63879]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:11 compute-0 sudo[64031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjzxqxsopwiegfrejhejzzbaefywlabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431370.8880715-152-33494553578764/AnsiballZ_systemd.py'
Oct 02 18:56:11 compute-0 sudo[64031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:11 compute-0 python3.9[64033]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:11 compute-0 systemd[1]: Reloading.
Oct 02 18:56:11 compute-0 systemd-rc-local-generator[64062]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:11 compute-0 systemd-sysv-generator[64065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:12 compute-0 systemd[1]: Reloading.
Oct 02 18:56:12 compute-0 systemd-rc-local-generator[64099]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:12 compute-0 systemd-sysv-generator[64103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:12 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 02 18:56:12 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 02 18:56:12 compute-0 sudo[64031]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:12 compute-0 sudo[64260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phfdflzyhnfuxrtvbfoswcezsdxqgbkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431372.479954-160-267235252861359/AnsiballZ_stat.py'
Oct 02 18:56:12 compute-0 sudo[64260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:12 compute-0 python3.9[64262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:12 compute-0 sudo[64260]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:13 compute-0 sudo[64383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unnlnoxvgzkgltavykujtklljgtzzwfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431372.479954-160-267235252861359/AnsiballZ_copy.py'
Oct 02 18:56:13 compute-0 sudo[64383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:13 compute-0 python3.9[64385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431372.479954-160-267235252861359/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:13 compute-0 sudo[64383]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:14 compute-0 sudo[64535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqbmbvksbpmvckticlpbmyuqwcqtcoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431373.754315-175-179578940940082/AnsiballZ_stat.py'
Oct 02 18:56:14 compute-0 sudo[64535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:14 compute-0 python3.9[64537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:14 compute-0 sudo[64535]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:14 compute-0 sudo[64658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jillvpiodsmgvhumtezgibppesodtzbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431373.754315-175-179578940940082/AnsiballZ_copy.py'
Oct 02 18:56:14 compute-0 sudo[64658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:14 compute-0 python3.9[64660]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431373.754315-175-179578940940082/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:14 compute-0 sudo[64658]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:15 compute-0 sudo[64810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukmxcmwcmcranopzykpswyyfermqpkjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431375.05681-190-92801556438440/AnsiballZ_systemd.py'
Oct 02 18:56:15 compute-0 sudo[64810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:15 compute-0 python3.9[64812]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:15 compute-0 systemd[1]: Reloading.
Oct 02 18:56:15 compute-0 systemd-sysv-generator[64843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:15 compute-0 systemd-rc-local-generator[64839]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:17 compute-0 systemd[1]: Reloading.
Oct 02 18:56:17 compute-0 systemd-sysv-generator[64880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:17 compute-0 systemd-rc-local-generator[64876]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:17 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 18:56:17 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 18:56:17 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 18:56:17 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 18:56:17 compute-0 sudo[64810]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:18 compute-0 python3.9[65039]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:56:18 compute-0 network[65056]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:56:18 compute-0 network[65057]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:56:18 compute-0 network[65058]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 18:56:23 compute-0 sudo[65320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlomhtszvnjkoonyewogpkkvgokwsvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431383.5846543-206-41877273909831/AnsiballZ_systemd.py'
Oct 02 18:56:23 compute-0 sudo[65320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:24 compute-0 python3.9[65322]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:24 compute-0 systemd[1]: Reloading.
Oct 02 18:56:24 compute-0 systemd-rc-local-generator[65350]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:24 compute-0 systemd-sysv-generator[65354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:24 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 02 18:56:24 compute-0 iptables.init[65362]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 02 18:56:24 compute-0 iptables.init[65362]: iptables: Flushing firewall rules: [  OK  ]
Oct 02 18:56:24 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 02 18:56:24 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 02 18:56:24 compute-0 sudo[65320]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:25 compute-0 sudo[65557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvounzkbprwnznbhzqtuurcncgbgxpnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431385.046454-206-15968593979109/AnsiballZ_systemd.py'
Oct 02 18:56:25 compute-0 sudo[65557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:25 compute-0 python3.9[65559]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:25 compute-0 sudo[65557]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:26 compute-0 sudo[65711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjxnymnrtqzwcyhhkxcdjxvzoobhaog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431385.9879098-222-151409388468749/AnsiballZ_systemd.py'
Oct 02 18:56:26 compute-0 sudo[65711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:26 compute-0 python3.9[65713]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:56:26 compute-0 systemd[1]: Reloading.
Oct 02 18:56:26 compute-0 systemd-rc-local-generator[65743]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:56:26 compute-0 systemd-sysv-generator[65747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:56:27 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 02 18:56:27 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 02 18:56:27 compute-0 sudo[65711]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:27 compute-0 sudo[65903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrjqmfsdqmhpvknthcmcxjmvanxjtte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431387.3067768-230-265846917027085/AnsiballZ_command.py'
Oct 02 18:56:27 compute-0 sudo[65903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:28 compute-0 python3.9[65905]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:56:28 compute-0 sudo[65903]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:28 compute-0 sudo[66056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopyhwkpclwupqlvjvujxvyskqpyaglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431388.4904735-244-255584862091999/AnsiballZ_stat.py'
Oct 02 18:56:28 compute-0 sudo[66056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:29 compute-0 python3.9[66058]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:56:29 compute-0 sudo[66056]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:29 compute-0 sudo[66181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjwgdrtpfiemexqgjkyftfebyunfijew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431388.4904735-244-255584862091999/AnsiballZ_copy.py'
Oct 02 18:56:29 compute-0 sudo[66181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:56:29 compute-0 python3.9[66183]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431388.4904735-244-255584862091999/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:56:29 compute-0 sudo[66181]: pam_unix(sudo:session): session closed for user root
Oct 02 18:56:30 compute-0 python3.9[66334]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:56:30 compute-0 polkitd[6312]: Registered Authentication Agent for unix-process:66336:255142 (system bus name :1.553 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 18:56:55 compute-0 polkit-agent-helper-1[66348]: pam_unix(polkit-1:auth): conversation failed
Oct 02 18:56:55 compute-0 polkit-agent-helper-1[66348]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 02 18:56:55 compute-0 polkitd[6312]: Unregistered Authentication Agent for unix-process:66336:255142 (system bus name :1.553, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 18:56:55 compute-0 polkitd[6312]: Operator of unix-process:66336:255142 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.552 [<unknown>] (owned by unix-user:zuul)
Oct 02 18:56:56 compute-0 sshd-session[61643]: Connection closed by 192.168.122.30 port 48950
Oct 02 18:56:56 compute-0 sshd-session[61640]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:56:56 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 02 18:56:56 compute-0 systemd[1]: session-14.scope: Consumed 21.265s CPU time.
Oct 02 18:56:56 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct 02 18:56:56 compute-0 systemd-logind[798]: Removed session 14.
Oct 02 18:57:08 compute-0 sshd-session[66374]: Received disconnect from 193.46.255.244 port 47908:11:  [preauth]
Oct 02 18:57:08 compute-0 sshd-session[66374]: Disconnected from authenticating user root 193.46.255.244 port 47908 [preauth]
Oct 02 18:57:11 compute-0 sshd-session[66376]: Accepted publickey for zuul from 192.168.122.30 port 38592 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:57:11 compute-0 systemd-logind[798]: New session 15 of user zuul.
Oct 02 18:57:11 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 02 18:57:11 compute-0 sshd-session[66376]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:57:12 compute-0 python3.9[66529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:57:13 compute-0 sudo[66683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbzmojhpmfkeljqbyssjznmyrxytfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431432.5829337-33-199126444024759/AnsiballZ_file.py'
Oct 02 18:57:13 compute-0 sudo[66683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:13 compute-0 python3.9[66685]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:13 compute-0 sudo[66683]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:13 compute-0 sudo[66858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdqyntknzktiinrhgbsgwoemlkgnuqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431433.4690692-41-5347218707317/AnsiballZ_stat.py'
Oct 02 18:57:13 compute-0 sudo[66858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:14 compute-0 python3.9[66860]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:14 compute-0 sudo[66858]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:14 compute-0 sudo[66936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqatsdfrfzqcavemhtqighuqffabwdrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431433.4690692-41-5347218707317/AnsiballZ_file.py'
Oct 02 18:57:14 compute-0 sudo[66936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:14 compute-0 python3.9[66938]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fcmaai7_ recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:14 compute-0 sudo[66936]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:15 compute-0 sudo[67088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muqssjjnvpiqtlwqzdqayvpmunxzsqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431434.9519887-61-73073351562259/AnsiballZ_stat.py'
Oct 02 18:57:15 compute-0 sudo[67088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:15 compute-0 python3.9[67090]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:15 compute-0 sudo[67088]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:15 compute-0 sudo[67166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinebsubhovyxlbspiqylwcicvuxbeyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431434.9519887-61-73073351562259/AnsiballZ_file.py'
Oct 02 18:57:15 compute-0 sudo[67166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:16 compute-0 python3.9[67168]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.yzb8vhg2 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:16 compute-0 sudo[67166]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:16 compute-0 sudo[67318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rquixrcejtpszlcmoduxwiedckkakbrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431436.3053842-74-7907139154769/AnsiballZ_file.py'
Oct 02 18:57:16 compute-0 sudo[67318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:16 compute-0 python3.9[67320]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:57:16 compute-0 sudo[67318]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:17 compute-0 sudo[67470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnchorjrjvcyxutkapplkrgremqwowet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431437.0333788-82-27799557243268/AnsiballZ_stat.py'
Oct 02 18:57:17 compute-0 sudo[67470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:17 compute-0 python3.9[67472]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:17 compute-0 sudo[67470]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:17 compute-0 sudo[67548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsvenamwfdxkzuphvkhpkwsggdbfcvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431437.0333788-82-27799557243268/AnsiballZ_file.py'
Oct 02 18:57:17 compute-0 sudo[67548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:18 compute-0 python3.9[67550]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:57:18 compute-0 sudo[67548]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:18 compute-0 sudo[67700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fheqkoxjfnrpxlncdgnyydhorpkjaxbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431438.2614655-82-191832096844476/AnsiballZ_stat.py'
Oct 02 18:57:18 compute-0 sudo[67700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:18 compute-0 python3.9[67702]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:18 compute-0 sudo[67700]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:19 compute-0 sudo[67778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmqiteqeifwrasblwgvrwqewkryulohc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431438.2614655-82-191832096844476/AnsiballZ_file.py'
Oct 02 18:57:19 compute-0 sudo[67778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:19 compute-0 python3.9[67780]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:57:19 compute-0 sudo[67778]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:19 compute-0 sudo[67930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxbkngkawnohgsbvitqhbqhhhaqyvhbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431439.5883405-105-88089912944181/AnsiballZ_file.py'
Oct 02 18:57:19 compute-0 sudo[67930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:20 compute-0 python3.9[67932]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:20 compute-0 sudo[67930]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:20 compute-0 sudo[68082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqkzcwvicisfhrsvaueoojbgluboloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431440.3915403-113-161127054726847/AnsiballZ_stat.py'
Oct 02 18:57:20 compute-0 sudo[68082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:20 compute-0 python3.9[68084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:20 compute-0 sudo[68082]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:21 compute-0 sudo[68160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssnaltfoemogwfqlaobazkpcysdtxsbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431440.3915403-113-161127054726847/AnsiballZ_file.py'
Oct 02 18:57:21 compute-0 sudo[68160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:21 compute-0 python3.9[68162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:21 compute-0 sudo[68160]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:21 compute-0 sudo[68312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgcfapfkzauamtoahnwgwrstyhxfmrgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431441.6309526-125-68220288649948/AnsiballZ_stat.py'
Oct 02 18:57:21 compute-0 sudo[68312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:22 compute-0 python3.9[68314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:22 compute-0 sudo[68312]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:22 compute-0 sudo[68390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpitwozfghodslhmemzaaodmhhnfhqkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431441.6309526-125-68220288649948/AnsiballZ_file.py'
Oct 02 18:57:22 compute-0 sudo[68390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:22 compute-0 python3.9[68392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:22 compute-0 sudo[68390]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:23 compute-0 sudo[68542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blntvixfnztdlcdqedtqbsymdsvcuthb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431442.8890176-137-259973275214005/AnsiballZ_systemd.py'
Oct 02 18:57:23 compute-0 sudo[68542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:23 compute-0 python3.9[68544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:57:23 compute-0 systemd[1]: Reloading.
Oct 02 18:57:24 compute-0 systemd-rc-local-generator[68573]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:57:24 compute-0 systemd-sysv-generator[68578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:57:24 compute-0 sudo[68542]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:24 compute-0 sudo[68732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wycpvjlyprabrzhbxonmvcgsdkvtqlta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431444.414131-145-63778495767225/AnsiballZ_stat.py'
Oct 02 18:57:24 compute-0 sudo[68732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:24 compute-0 python3.9[68734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:24 compute-0 sudo[68732]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:25 compute-0 sudo[68810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iumyyrwqxzouogmoztuhhzcwexgjgefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431444.414131-145-63778495767225/AnsiballZ_file.py'
Oct 02 18:57:25 compute-0 sudo[68810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:25 compute-0 python3.9[68812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:25 compute-0 sudo[68810]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:26 compute-0 sudo[68962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lufzufcppcjukwcaaqcusxpxuxuxuake ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431445.7444365-157-78024565550522/AnsiballZ_stat.py'
Oct 02 18:57:26 compute-0 sudo[68962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:26 compute-0 python3.9[68964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:26 compute-0 sudo[68962]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:26 compute-0 sudo[69040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numufgwwkiprsbefhqstmbouupoxljgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431445.7444365-157-78024565550522/AnsiballZ_file.py'
Oct 02 18:57:26 compute-0 sudo[69040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:26 compute-0 python3.9[69042]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:26 compute-0 sudo[69040]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:27 compute-0 sshd-session[69167]: banner exchange: Connection from 195.178.110.109 port 36332: invalid format
Oct 02 18:57:27 compute-0 sudo[69193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upbdhdvwybjipjfvnlxhyvydmmyofien ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431447.0282543-169-8429417967456/AnsiballZ_systemd.py'
Oct 02 18:57:27 compute-0 sudo[69193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:27 compute-0 sshd-session[69196]: banner exchange: Connection from 195.178.110.109 port 36334: invalid format
Oct 02 18:57:27 compute-0 python3.9[69195]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 18:57:27 compute-0 systemd[1]: Reloading.
Oct 02 18:57:27 compute-0 systemd-sysv-generator[69225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 18:57:27 compute-0 systemd-rc-local-generator[69220]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 18:57:28 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 18:57:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 18:57:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 18:57:28 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 18:57:28 compute-0 sudo[69193]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:29 compute-0 python3.9[69386]: ansible-ansible.builtin.service_facts Invoked
Oct 02 18:57:29 compute-0 network[69403]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 18:57:29 compute-0 network[69404]: 'network-scripts' will be removed from distribution in near future.
Oct 02 18:57:29 compute-0 network[69405]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 18:57:34 compute-0 sudo[69666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmwickwjqzzxwihoqqzhhevwynxxpni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431454.1110213-195-98944716387542/AnsiballZ_stat.py'
Oct 02 18:57:34 compute-0 sudo[69666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:34 compute-0 python3.9[69668]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:34 compute-0 sudo[69666]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:34 compute-0 sudo[69744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbvasrpreqjwzaycyfwmkodgwwtejjyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431454.1110213-195-98944716387542/AnsiballZ_file.py'
Oct 02 18:57:34 compute-0 sudo[69744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:35 compute-0 python3.9[69746]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:35 compute-0 sudo[69744]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:35 compute-0 sudo[69896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxpiexjergqfkalreewhzswvosoqqrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431455.476582-208-19117966221230/AnsiballZ_file.py'
Oct 02 18:57:35 compute-0 sudo[69896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:36 compute-0 python3.9[69898]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:36 compute-0 sudo[69896]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:36 compute-0 sudo[70048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqokpoixlsxivsqlxrexpzasoownbcam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431456.3634791-216-57591385413168/AnsiballZ_stat.py'
Oct 02 18:57:36 compute-0 sudo[70048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:36 compute-0 python3.9[70050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:36 compute-0 sudo[70048]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:37 compute-0 sudo[70171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnewcpliewixknwzilfjkbqdkputpuii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431456.3634791-216-57591385413168/AnsiballZ_copy.py'
Oct 02 18:57:37 compute-0 sudo[70171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:37 compute-0 python3.9[70173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431456.3634791-216-57591385413168/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:37 compute-0 sudo[70171]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:38 compute-0 sudo[70323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zewznzpnugdelorhyxwnrpzxiydxvxbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431457.9017165-234-281336028814809/AnsiballZ_timezone.py'
Oct 02 18:57:38 compute-0 sudo[70323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:38 compute-0 python3.9[70325]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 18:57:38 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 18:57:38 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 18:57:39 compute-0 sudo[70323]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:39 compute-0 sudo[70479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toixqsrsohmbnadniaimfggdwerrguvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431459.3000166-243-72023719743133/AnsiballZ_file.py'
Oct 02 18:57:39 compute-0 sudo[70479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:39 compute-0 python3.9[70481]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:39 compute-0 sudo[70479]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:40 compute-0 sudo[70631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nulqkyccdgjnssqyodhsdbznwdbprymh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431460.0424738-251-211582024842118/AnsiballZ_stat.py'
Oct 02 18:57:40 compute-0 sudo[70631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:40 compute-0 python3.9[70633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:40 compute-0 sudo[70631]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:41 compute-0 sudo[70754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujycbmxczlwwtkbqolzcutftugwwgmwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431460.0424738-251-211582024842118/AnsiballZ_copy.py'
Oct 02 18:57:41 compute-0 sudo[70754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:41 compute-0 python3.9[70756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431460.0424738-251-211582024842118/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:41 compute-0 sudo[70754]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:41 compute-0 sudo[70906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igjdtcskqkzhbwspwvcqdfhqopueogyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431461.5077157-266-16038953924014/AnsiballZ_stat.py'
Oct 02 18:57:41 compute-0 sudo[70906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:42 compute-0 python3.9[70908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:42 compute-0 sudo[70906]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:42 compute-0 sudo[71029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhmasdoygtobtjfluawdzwdbhfwqoud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431461.5077157-266-16038953924014/AnsiballZ_copy.py'
Oct 02 18:57:42 compute-0 sudo[71029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:42 compute-0 python3.9[71031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431461.5077157-266-16038953924014/.source.yaml _original_basename=.5rj1l9d_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:42 compute-0 sudo[71029]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:43 compute-0 sudo[71181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfgxxwwzzybfnmkckzdntatbiyfxxask ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431462.8902984-281-178951643314443/AnsiballZ_stat.py'
Oct 02 18:57:43 compute-0 sudo[71181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:43 compute-0 python3.9[71183]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:43 compute-0 sudo[71181]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:43 compute-0 sudo[71304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alpxeemwpedcwswqrjzuvtjljkvdznsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431462.8902984-281-178951643314443/AnsiballZ_copy.py'
Oct 02 18:57:43 compute-0 sudo[71304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:44 compute-0 python3.9[71306]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431462.8902984-281-178951643314443/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:44 compute-0 sudo[71304]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:44 compute-0 sudo[71456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-catkealwwcwktwnrhawmpovlkpwlnplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431464.2869606-296-102215255577659/AnsiballZ_command.py'
Oct 02 18:57:44 compute-0 sudo[71456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:45 compute-0 python3.9[71458]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:57:45 compute-0 sudo[71456]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:45 compute-0 sudo[71609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztrgojjksybkpiofdkkxibcvwvbazgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431465.2288694-304-278147851805967/AnsiballZ_command.py'
Oct 02 18:57:45 compute-0 sudo[71609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:45 compute-0 python3.9[71611]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:57:45 compute-0 sudo[71609]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:46 compute-0 sudo[71762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egdegbvzaxqtuoiuqcnxttbkoalveumw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431466.029597-312-191227158824020/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 18:57:46 compute-0 sudo[71762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:46 compute-0 python3[71764]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 18:57:46 compute-0 sudo[71762]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:47 compute-0 sudo[71914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpqydnsxiyimnlfxueusositopjzmkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431467.0129423-320-48714024853097/AnsiballZ_stat.py'
Oct 02 18:57:47 compute-0 sudo[71914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:47 compute-0 python3.9[71916]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:47 compute-0 sudo[71914]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:48 compute-0 sudo[72037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgoxpkmqosuagcfvelxdmycibcdrkjjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431467.0129423-320-48714024853097/AnsiballZ_copy.py'
Oct 02 18:57:48 compute-0 sudo[72037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:48 compute-0 python3.9[72039]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431467.0129423-320-48714024853097/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:48 compute-0 sudo[72037]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:48 compute-0 sudo[72189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbczwerxkyeyhjykvtsxvpizclyoerb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431468.43078-335-26004558551838/AnsiballZ_stat.py'
Oct 02 18:57:48 compute-0 sudo[72189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:48 compute-0 python3.9[72191]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:49 compute-0 sudo[72189]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:49 compute-0 sudo[72312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jishacsxptxunsziqdeiggzkdjzzmlpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431468.43078-335-26004558551838/AnsiballZ_copy.py'
Oct 02 18:57:49 compute-0 sudo[72312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:49 compute-0 python3.9[72314]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431468.43078-335-26004558551838/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:49 compute-0 sudo[72312]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:50 compute-0 sudo[72464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokkucfpwskkyhhrnrrnlgexqisbewvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431469.8425791-350-97887644082173/AnsiballZ_stat.py'
Oct 02 18:57:50 compute-0 sudo[72464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:50 compute-0 python3.9[72466]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:50 compute-0 sudo[72464]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:50 compute-0 sudo[72587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufaazncyuuigrpqnuvnmgmduunqwumei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431469.8425791-350-97887644082173/AnsiballZ_copy.py'
Oct 02 18:57:50 compute-0 sudo[72587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:50 compute-0 python3.9[72589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431469.8425791-350-97887644082173/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:50 compute-0 sudo[72587]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:51 compute-0 sudo[72739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvlssawjxmcjoyhyqhorkkdiykvnbum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431471.1053715-365-170246704470320/AnsiballZ_stat.py'
Oct 02 18:57:51 compute-0 sudo[72739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:51 compute-0 python3.9[72741]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:51 compute-0 sudo[72739]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:52 compute-0 sudo[72862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwotqjpyoejrvbtcjolqcvhflezapub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431471.1053715-365-170246704470320/AnsiballZ_copy.py'
Oct 02 18:57:52 compute-0 sudo[72862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:52 compute-0 python3.9[72864]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431471.1053715-365-170246704470320/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:52 compute-0 sudo[72862]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:53 compute-0 sudo[73014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srfyawczerxreyhvmsvcnjzfvewkugnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431472.5571356-380-60761043461354/AnsiballZ_stat.py'
Oct 02 18:57:53 compute-0 sudo[73014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:53 compute-0 python3.9[73016]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:57:53 compute-0 sudo[73014]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:53 compute-0 sudo[73137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulidygeplydiqeefbsqizgyhaabsqycc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431472.5571356-380-60761043461354/AnsiballZ_copy.py'
Oct 02 18:57:53 compute-0 sudo[73137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:54 compute-0 python3.9[73139]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431472.5571356-380-60761043461354/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:54 compute-0 sudo[73137]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:54 compute-0 sudo[73289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrikkvbzcdtapsmkrqwtdlwxwvhcbdyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431474.3009233-395-73523254628562/AnsiballZ_file.py'
Oct 02 18:57:54 compute-0 sudo[73289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:54 compute-0 python3.9[73291]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:54 compute-0 sudo[73289]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:55 compute-0 sudo[73441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrfwvonqhcslgkhxjgiighuriblbsicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431475.0216198-403-94244350323572/AnsiballZ_command.py'
Oct 02 18:57:55 compute-0 sudo[73441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:55 compute-0 python3.9[73443]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:57:55 compute-0 sudo[73441]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:56 compute-0 sudo[73601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlawfjwlhyvebzcykxdmhjcsvphqotvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431475.9276435-411-110930717881755/AnsiballZ_blockinfile.py'
Oct 02 18:57:56 compute-0 sudo[73601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:56 compute-0 python3.9[73603]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:56 compute-0 sudo[73601]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:57 compute-0 sudo[73754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsebvheajxgfckqwmafttsobgwivyqrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431476.9294121-420-192141504146802/AnsiballZ_file.py'
Oct 02 18:57:57 compute-0 sudo[73754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:57 compute-0 python3.9[73756]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:57 compute-0 sudo[73754]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:57 compute-0 sudo[73906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicrvevncmwepjaqnsamawfuqjpcuvqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431477.661537-420-264365613731637/AnsiballZ_file.py'
Oct 02 18:57:57 compute-0 sudo[73906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:58 compute-0 python3.9[73908]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:57:58 compute-0 sudo[73906]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:59 compute-0 sudo[74058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbcvnvttwnmcezlceshrlhtjwhfewbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431478.5447972-435-111470298465555/AnsiballZ_mount.py'
Oct 02 18:57:59 compute-0 sudo[74058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:57:59 compute-0 python3.9[74060]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 18:57:59 compute-0 sudo[74058]: pam_unix(sudo:session): session closed for user root
Oct 02 18:57:59 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 18:57:59 compute-0 sudo[74212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmevaindylhbdiysqkllxiqhwykkdknv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431479.4739912-435-222157996167748/AnsiballZ_mount.py'
Oct 02 18:57:59 compute-0 sudo[74212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:00 compute-0 python3.9[74214]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 18:58:00 compute-0 sudo[74212]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:00 compute-0 sshd-session[66379]: Connection closed by 192.168.122.30 port 38592
Oct 02 18:58:00 compute-0 sshd-session[66376]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:58:00 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 02 18:58:00 compute-0 systemd[1]: session-15.scope: Consumed 36.286s CPU time.
Oct 02 18:58:00 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct 02 18:58:00 compute-0 systemd-logind[798]: Removed session 15.
Oct 02 18:58:01 compute-0 chronyd[61614]: Selected source 51.222.12.92 (pool.ntp.org)
Oct 02 18:58:06 compute-0 sshd-session[74240]: Accepted publickey for zuul from 192.168.122.30 port 55154 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:58:06 compute-0 systemd-logind[798]: New session 16 of user zuul.
Oct 02 18:58:06 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 02 18:58:06 compute-0 sshd-session[74240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:58:06 compute-0 sudo[74393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwxtjlsfoqtxgvcjebawafeqfqwblfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431486.4398699-16-272667286393837/AnsiballZ_tempfile.py'
Oct 02 18:58:06 compute-0 sudo[74393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:07 compute-0 python3.9[74395]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 18:58:07 compute-0 sudo[74393]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:07 compute-0 sudo[74545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrbjknsoqhpoifbrzhsetfswrzywdix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431487.2951376-28-207692685234107/AnsiballZ_stat.py'
Oct 02 18:58:07 compute-0 sudo[74545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:07 compute-0 python3.9[74547]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:58:08 compute-0 sudo[74545]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:08 compute-0 sudo[74697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxdgdtwlyoeulqnfkvzpzfevhxaqqpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431488.2209892-38-86193184117387/AnsiballZ_setup.py'
Oct 02 18:58:08 compute-0 sudo[74697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:09 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 18:58:09 compute-0 python3.9[74699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:58:09 compute-0 sudo[74697]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:09 compute-0 sudo[74852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhfitiuzfbcgrtfywwifoktntdrlutxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431489.4368286-47-47762966843460/AnsiballZ_blockinfile.py'
Oct 02 18:58:09 compute-0 sudo[74852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:10 compute-0 python3.9[74854]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC21k3Hr5vvvvY+VJPp8vH1zCZUOvRDIbRIT/Znk5Tua6EPcQQHjClYuQFDBDYXVBDp1js3WgAp76lcWZHpQ0bBpXJ9mKBbH5NXp/VP4PcLj1QdEYiM0DMDHqOdfRkqtXXImqhZ6ymRVBm3Efwgwc7Yl9gWYwVM712OtkXG+ek/jeAAK7/sotEYN9B1UpcJzoxJQtbgWpWKM3yTLByKTf9+zBwA9irdJeRuxUbvBwxAUgPo8AocYZWKCeOoCzBwMGn8t25tZxz3YLNY2jBfBtXkVbvBi3zeFqhzqmLD7KmqJz4Pqqo2qbIPvdF47REd0Exy8+DMLdwWvXD4ETS8PJhiWoG/CTxZIUXJOE1wcTR+6spLzpfYFFMu01d8gf6k1qlOm+1Qr65yt3GP0R1rHR69xapge+JFvUQx6xJbiRA0R0fBfZTvO59D6/BWTcFtBIaxSouZ0PrfjCrCxQiyYnw11v+6jF/W583xy1PRydS4e98p05X3LRadc08VhfJZhLU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILKu9erH8Fzv43uTsy2iSNJ9OsnKGen24J1hp5+ESZHS
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC/lvYA+dp8DxFpNdWA8Xr5Ttihdu51cYG7CBN/BbakToZH7H4I0SAoljyUVSc6rArIAQTxWNYkH4GA3qRUQOcE=
                                             create=True mode=0644 path=/tmp/ansible.bv_08cg8 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:10 compute-0 sudo[74852]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:10 compute-0 sudo[75004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olreqbqtfkriymjltfermvhbxuepaqin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431490.4718657-55-78075329307730/AnsiballZ_command.py'
Oct 02 18:58:10 compute-0 sudo[75004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:11 compute-0 python3.9[75006]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.bv_08cg8' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:58:11 compute-0 sudo[75004]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:11 compute-0 sudo[75158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkawwniiqudhbypnfjsmkacpkalmygza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431491.3703227-63-189766726773657/AnsiballZ_file.py'
Oct 02 18:58:11 compute-0 sudo[75158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:12 compute-0 python3.9[75160]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.bv_08cg8 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:12 compute-0 sudo[75158]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:12 compute-0 sshd-session[74243]: Connection closed by 192.168.122.30 port 55154
Oct 02 18:58:12 compute-0 sshd-session[74240]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:58:12 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 02 18:58:12 compute-0 systemd[1]: session-16.scope: Consumed 3.928s CPU time.
Oct 02 18:58:12 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct 02 18:58:12 compute-0 systemd-logind[798]: Removed session 16.
Oct 02 18:58:18 compute-0 sshd-session[75185]: Accepted publickey for zuul from 192.168.122.30 port 35104 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:58:18 compute-0 systemd-logind[798]: New session 17 of user zuul.
Oct 02 18:58:18 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 02 18:58:18 compute-0 sshd-session[75185]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:58:19 compute-0 python3.9[75338]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:58:20 compute-0 sudo[75492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvyvkptxjrpelhnmbmjsflkqimsiwhel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431500.0356333-32-141612481509023/AnsiballZ_systemd.py'
Oct 02 18:58:20 compute-0 sudo[75492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:21 compute-0 python3.9[75494]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 18:58:21 compute-0 sudo[75492]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:21 compute-0 sudo[75646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thmwltosysshhzpyrquhxaqbgubsvsjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431501.3029919-40-245048210615263/AnsiballZ_systemd.py'
Oct 02 18:58:21 compute-0 sudo[75646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:21 compute-0 python3.9[75648]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 18:58:21 compute-0 sudo[75646]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:22 compute-0 sudo[75799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfgzcyrnxxfichdlpgiltftxpprsnwqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431502.1592329-49-13968063048123/AnsiballZ_command.py'
Oct 02 18:58:22 compute-0 sudo[75799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:22 compute-0 python3.9[75801]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:58:22 compute-0 sudo[75799]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:23 compute-0 sudo[75952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csltcelyccdxzxddaasvaqvvyakpwbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431503.0262241-57-110032802889519/AnsiballZ_stat.py'
Oct 02 18:58:23 compute-0 sudo[75952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:23 compute-0 python3.9[75954]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:58:23 compute-0 sudo[75952]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:24 compute-0 sudo[76106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjtlrzxapejwpcsxjmeusiietjaawxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431503.8727524-65-114671921717563/AnsiballZ_command.py'
Oct 02 18:58:24 compute-0 sudo[76106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:24 compute-0 python3.9[76108]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:58:24 compute-0 sudo[76106]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:25 compute-0 sudo[76261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwsdgmypkovkvhyyhdlcotzuyzyyffur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431504.6016822-73-40449309203456/AnsiballZ_file.py'
Oct 02 18:58:25 compute-0 sudo[76261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:25 compute-0 python3.9[76263]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:25 compute-0 sudo[76261]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:25 compute-0 sshd-session[75188]: Connection closed by 192.168.122.30 port 35104
Oct 02 18:58:25 compute-0 sshd-session[75185]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:58:25 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 02 18:58:25 compute-0 systemd[1]: session-17.scope: Consumed 4.927s CPU time.
Oct 02 18:58:25 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct 02 18:58:25 compute-0 systemd-logind[798]: Removed session 17.
Oct 02 18:58:32 compute-0 sshd-session[76288]: Accepted publickey for zuul from 192.168.122.30 port 38528 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:58:32 compute-0 systemd-logind[798]: New session 18 of user zuul.
Oct 02 18:58:32 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 02 18:58:32 compute-0 sshd-session[76288]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:58:33 compute-0 python3.9[76441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:58:34 compute-0 sudo[76595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oomcuttynusfoblshpqhkycswhlhfmqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431513.8282547-34-280943713787190/AnsiballZ_setup.py'
Oct 02 18:58:34 compute-0 sudo[76595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:34 compute-0 python3.9[76597]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:58:34 compute-0 sudo[76595]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:35 compute-0 sudo[76679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atrfbcthelnhcdvjiqhefpzjlspwxuqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431513.8282547-34-280943713787190/AnsiballZ_dnf.py'
Oct 02 18:58:35 compute-0 sudo[76679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:35 compute-0 python3.9[76681]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 18:58:36 compute-0 sudo[76679]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:37 compute-0 python3.9[76832]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:58:38 compute-0 python3.9[76983]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 18:58:39 compute-0 python3.9[77133]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:58:40 compute-0 python3.9[77283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 18:58:40 compute-0 sshd-session[76291]: Connection closed by 192.168.122.30 port 38528
Oct 02 18:58:40 compute-0 sshd-session[76288]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:58:40 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 02 18:58:40 compute-0 systemd[1]: session-18.scope: Consumed 5.909s CPU time.
Oct 02 18:58:40 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct 02 18:58:40 compute-0 systemd-logind[798]: Removed session 18.
Oct 02 18:58:46 compute-0 sshd-session[77308]: Accepted publickey for zuul from 192.168.122.30 port 46844 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:58:46 compute-0 systemd-logind[798]: New session 19 of user zuul.
Oct 02 18:58:46 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 02 18:58:46 compute-0 sshd-session[77308]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:58:48 compute-0 python3.9[77461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:58:49 compute-0 sudo[77615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdggoupltziypgguyjqsimfjkodskpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431529.3770554-50-115846347786229/AnsiballZ_file.py'
Oct 02 18:58:49 compute-0 sudo[77615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:50 compute-0 python3.9[77617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:58:50 compute-0 sudo[77615]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:50 compute-0 sudo[77767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkrhgpuvqbxbrkmditvchhbduglmkmxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431530.2632551-50-150125784263223/AnsiballZ_file.py'
Oct 02 18:58:50 compute-0 sudo[77767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:50 compute-0 python3.9[77769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:58:50 compute-0 sudo[77767]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:51 compute-0 sudo[77919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbfqotqokacbmrttzettnrjjmxasovtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431531.017487-65-225526972168017/AnsiballZ_stat.py'
Oct 02 18:58:51 compute-0 sudo[77919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:51 compute-0 python3.9[77921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:58:51 compute-0 sudo[77919]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:52 compute-0 sudo[78042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zezthdetjhtphrsgqnradxqbyqyclynq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431531.017487-65-225526972168017/AnsiballZ_copy.py'
Oct 02 18:58:52 compute-0 sudo[78042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:52 compute-0 python3.9[78044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431531.017487-65-225526972168017/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=598b936f2c4cac07a8cf621c6c9aa5e467d42a94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:52 compute-0 sudo[78042]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:52 compute-0 sudo[78194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogkcektcnxeufivjlavnirxydplndkpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431532.6110058-65-196252642525933/AnsiballZ_stat.py'
Oct 02 18:58:52 compute-0 sudo[78194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:53 compute-0 python3.9[78196]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:58:53 compute-0 sudo[78194]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:53 compute-0 sudo[78317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrkimdvnceayyllqjxvisuesccoitxlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431532.6110058-65-196252642525933/AnsiballZ_copy.py'
Oct 02 18:58:53 compute-0 sudo[78317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:53 compute-0 python3.9[78319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431532.6110058-65-196252642525933/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=80da1424d4adf0f0a9676171e02692dc048d6205 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:53 compute-0 sudo[78317]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:54 compute-0 sudo[78469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prasztdczltecidpgpgweimaemznhevz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431533.9002986-65-142188861889380/AnsiballZ_stat.py'
Oct 02 18:58:54 compute-0 sudo[78469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:54 compute-0 python3.9[78471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:58:54 compute-0 sudo[78469]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:54 compute-0 sudo[78592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orxydmhmyzmgrtdhkseqouipejbxvubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431533.9002986-65-142188861889380/AnsiballZ_copy.py'
Oct 02 18:58:54 compute-0 sudo[78592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:55 compute-0 python3.9[78594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431533.9002986-65-142188861889380/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1433f43463d9f67195ff1ef6ed7920e8ad8fd605 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:55 compute-0 sudo[78592]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:55 compute-0 sudo[78744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrbqttlomnuwevcxjliupesmoaadykd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431535.4475508-109-73715065741978/AnsiballZ_file.py'
Oct 02 18:58:55 compute-0 sudo[78744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:55 compute-0 python3.9[78746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:58:55 compute-0 sudo[78744]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:56 compute-0 sudo[78896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldcseidptymmfagmechzwouvgprxbimf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431536.094085-109-225885131031219/AnsiballZ_file.py'
Oct 02 18:58:56 compute-0 sudo[78896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:56 compute-0 python3.9[78898]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:58:56 compute-0 sudo[78896]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:57 compute-0 sudo[79048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yptzignhoujaxejwvcbgmolmqnzlndfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431536.8896964-124-118373318234373/AnsiballZ_stat.py'
Oct 02 18:58:57 compute-0 sudo[79048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:57 compute-0 python3.9[79050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:58:57 compute-0 sudo[79048]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:57 compute-0 sudo[79171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqaaprxymbnqlbduqubtsrejwatchita ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431536.8896964-124-118373318234373/AnsiballZ_copy.py'
Oct 02 18:58:57 compute-0 sudo[79171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:58 compute-0 python3.9[79173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431536.8896964-124-118373318234373/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4c0e7192846d98dc61b442815cfb029bf0404546 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:58 compute-0 sudo[79171]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:58 compute-0 sudo[79323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnbroalihdglogtbsdcghhfgwynzyzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431538.1976392-124-140696085946527/AnsiballZ_stat.py'
Oct 02 18:58:58 compute-0 sudo[79323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:58 compute-0 python3.9[79325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:58:58 compute-0 sudo[79323]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:59 compute-0 sudo[79446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wykviuxrfwwazoaekezakagvkkflryrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431538.1976392-124-140696085946527/AnsiballZ_copy.py'
Oct 02 18:58:59 compute-0 sudo[79446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:58:59 compute-0 python3.9[79448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431538.1976392-124-140696085946527/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=80da1424d4adf0f0a9676171e02692dc048d6205 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:58:59 compute-0 sudo[79446]: pam_unix(sudo:session): session closed for user root
Oct 02 18:58:59 compute-0 sudo[79598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koegzcneusagqmembdfegzhrixzkporz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431539.5177948-124-208946889859444/AnsiballZ_stat.py'
Oct 02 18:58:59 compute-0 sudo[79598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:00 compute-0 python3.9[79600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:00 compute-0 sudo[79598]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:00 compute-0 sudo[79721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aynvyikwerxpcuzfhvjfrrpazqkfdkxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431539.5177948-124-208946889859444/AnsiballZ_copy.py'
Oct 02 18:59:00 compute-0 sudo[79721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:00 compute-0 python3.9[79723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431539.5177948-124-208946889859444/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eb147dbe8f31d26a2707cefe69e4468f2a20309e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:00 compute-0 sudo[79721]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:01 compute-0 sudo[79873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gayvzebmghhbjqesfjgcgacwefsikabt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431541.019604-168-153197949319919/AnsiballZ_file.py'
Oct 02 18:59:01 compute-0 sudo[79873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:01 compute-0 python3.9[79875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:01 compute-0 sudo[79873]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:02 compute-0 sudo[80025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfysvcvjtroazmhmeumakapgvpvdoyib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431541.6994607-168-231911020059982/AnsiballZ_file.py'
Oct 02 18:59:02 compute-0 sudo[80025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:02 compute-0 python3.9[80027]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:02 compute-0 sudo[80025]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:02 compute-0 sudo[80177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cluzpsvvgapdpmamyfyyngverptijuzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431542.5327897-183-140433726130855/AnsiballZ_stat.py'
Oct 02 18:59:02 compute-0 sudo[80177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:03 compute-0 python3.9[80179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:03 compute-0 sudo[80177]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:03 compute-0 sudo[80300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzoxqvbofxxkimdvywqdtrgtndeteagk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431542.5327897-183-140433726130855/AnsiballZ_copy.py'
Oct 02 18:59:03 compute-0 sudo[80300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:03 compute-0 python3.9[80302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431542.5327897-183-140433726130855/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7dc7a3298513771d46e26dddf3aeb20af50f1e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:03 compute-0 sudo[80300]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:04 compute-0 sudo[80452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfmcorxrndoojvoqwpjkkhqdylmfkvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431543.8638725-183-219268496231393/AnsiballZ_stat.py'
Oct 02 18:59:04 compute-0 sudo[80452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:04 compute-0 python3.9[80454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:04 compute-0 sudo[80452]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:04 compute-0 sudo[80575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnguctqrnvovfzyofzdepvbuqmspwonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431543.8638725-183-219268496231393/AnsiballZ_copy.py'
Oct 02 18:59:04 compute-0 sudo[80575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:05 compute-0 python3.9[80577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431543.8638725-183-219268496231393/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=de84d5c883e215fe1d0d1b6b439e1ad4def69f19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:05 compute-0 sudo[80575]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:05 compute-0 sudo[80727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reyoscctwouktkdlmoomkdxavrgoksmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431545.273762-183-16662240236374/AnsiballZ_stat.py'
Oct 02 18:59:05 compute-0 sudo[80727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:05 compute-0 python3.9[80729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:05 compute-0 sudo[80727]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:06 compute-0 sudo[80850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pclngmngugnwzgxdlisbzyvsarfkvjsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431545.273762-183-16662240236374/AnsiballZ_copy.py'
Oct 02 18:59:06 compute-0 sudo[80850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:06 compute-0 python3.9[80852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431545.273762-183-16662240236374/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=599f1ffb6b9f3ba0a0d70190f17123e53c7d8af5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:06 compute-0 sudo[80850]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:07 compute-0 sudo[81002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywltslfswpviktcuktfrssjvowmupnoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431546.8165238-227-150273825580852/AnsiballZ_file.py'
Oct 02 18:59:07 compute-0 sudo[81002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:07 compute-0 python3.9[81004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:07 compute-0 sudo[81002]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:07 compute-0 sudo[81154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icsjihdzopdurucaprdvyffssxpbwifp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431547.545454-227-115440354169181/AnsiballZ_file.py'
Oct 02 18:59:07 compute-0 sudo[81154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:08 compute-0 python3.9[81156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:08 compute-0 sudo[81154]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:08 compute-0 sudo[81306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozlzxshzvkuzvsdxpocwartxfyldgdtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431548.3742502-242-199880660910744/AnsiballZ_stat.py'
Oct 02 18:59:08 compute-0 sudo[81306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:08 compute-0 python3.9[81308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:08 compute-0 sudo[81306]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:09 compute-0 sudo[81429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhjqfqeyazojkqkdtzddppulhdhzgkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431548.3742502-242-199880660910744/AnsiballZ_copy.py'
Oct 02 18:59:09 compute-0 sudo[81429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:09 compute-0 python3.9[81431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431548.3742502-242-199880660910744/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b34fd629ff4b0d7c2778bc04b350445c476f5c41 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:09 compute-0 sudo[81429]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:10 compute-0 sudo[81581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubcttehrzctlkimsiwahvzyvpxcuhxft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431549.7535787-242-72675545904525/AnsiballZ_stat.py'
Oct 02 18:59:10 compute-0 sudo[81581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:10 compute-0 python3.9[81583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:10 compute-0 sudo[81581]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:10 compute-0 sudo[81704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtdugaviwhpzlvzncmytpjgikudnvbqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431549.7535787-242-72675545904525/AnsiballZ_copy.py'
Oct 02 18:59:10 compute-0 sudo[81704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:10 compute-0 python3.9[81706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431549.7535787-242-72675545904525/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=623decd98a8e5437a0ae30438bd6d113d99b097d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:10 compute-0 sudo[81704]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:11 compute-0 sudo[81856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbucfbjvacjcyceuizorjuneadslanfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431551.1410837-242-262384215103781/AnsiballZ_stat.py'
Oct 02 18:59:11 compute-0 sudo[81856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:11 compute-0 python3.9[81858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:11 compute-0 sudo[81856]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:12 compute-0 sudo[81979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whuprbprncrugldwckjytfbietvknaqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431551.1410837-242-262384215103781/AnsiballZ_copy.py'
Oct 02 18:59:12 compute-0 sudo[81979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:12 compute-0 python3.9[81981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431551.1410837-242-262384215103781/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8df48e9e9c7ecaa321c85190a69d4063ca5d2ca3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:12 compute-0 sudo[81979]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:13 compute-0 sudo[82131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqcqpkhyuvsrtmyohwfrmzfkoamdaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431552.6187103-286-179364934792818/AnsiballZ_file.py'
Oct 02 18:59:13 compute-0 sudo[82131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:13 compute-0 python3.9[82133]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:13 compute-0 sudo[82131]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:13 compute-0 sudo[82283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nopnoleiccpquwdypbpgtgojhyijwayp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431553.3829103-286-201621353955406/AnsiballZ_file.py'
Oct 02 18:59:13 compute-0 sudo[82283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:13 compute-0 python3.9[82285]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:13 compute-0 sudo[82283]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:14 compute-0 sudo[82435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llsusxaboxqwqsvgqyglbdqwssmvfjkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431554.1304207-301-167446682326871/AnsiballZ_stat.py'
Oct 02 18:59:14 compute-0 sudo[82435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:14 compute-0 python3.9[82437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:14 compute-0 sudo[82435]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:15 compute-0 sudo[82558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuhkqypwbuxbciafxitpktnuwsucccuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431554.1304207-301-167446682326871/AnsiballZ_copy.py'
Oct 02 18:59:15 compute-0 sudo[82558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:15 compute-0 python3.9[82560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431554.1304207-301-167446682326871/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3a2873de201403d5063060748ebd04c916db8180 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:15 compute-0 sudo[82558]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:15 compute-0 sudo[82710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ismqimmmrmkzjktxwhfpomeeoghnggmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431555.4804823-301-159498210175207/AnsiballZ_stat.py'
Oct 02 18:59:15 compute-0 sudo[82710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:16 compute-0 python3.9[82712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:16 compute-0 sudo[82710]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:16 compute-0 sudo[82833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgsaaupmwwddzmvdiwovppgwyvxjvhho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431555.4804823-301-159498210175207/AnsiballZ_copy.py'
Oct 02 18:59:16 compute-0 sudo[82833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:16 compute-0 python3.9[82835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431555.4804823-301-159498210175207/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=de84d5c883e215fe1d0d1b6b439e1ad4def69f19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:16 compute-0 sudo[82833]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:17 compute-0 sudo[82985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkgrrugifikcxmsqpuhsidgmlesguhbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431556.8175468-301-17427359980574/AnsiballZ_stat.py'
Oct 02 18:59:17 compute-0 sudo[82985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:17 compute-0 python3.9[82987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:17 compute-0 sudo[82985]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:17 compute-0 sudo[83108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjothgzkamgwgumovssvlzqfdateuim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431556.8175468-301-17427359980574/AnsiballZ_copy.py'
Oct 02 18:59:17 compute-0 sudo[83108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:18 compute-0 python3.9[83110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431556.8175468-301-17427359980574/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a1108d188d81b923be978166c2ab5bcb71090e7e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:18 compute-0 sudo[83108]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:19 compute-0 sudo[83260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfcvzwuhuchksfcagfraunqdifkupkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431558.970448-361-238143363478042/AnsiballZ_file.py'
Oct 02 18:59:19 compute-0 sudo[83260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:19 compute-0 python3.9[83262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:19 compute-0 sudo[83260]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:19 compute-0 sudo[83412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uazbqdgxwcwjnvmhknbyzvusnuqdhyzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431559.6498191-369-268437637544204/AnsiballZ_stat.py'
Oct 02 18:59:19 compute-0 sudo[83412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:20 compute-0 python3.9[83414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:20 compute-0 sudo[83412]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:20 compute-0 sudo[83535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbferjyfmdplritlxyynovwtkaxmgyef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431559.6498191-369-268437637544204/AnsiballZ_copy.py'
Oct 02 18:59:20 compute-0 sudo[83535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:20 compute-0 python3.9[83537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431559.6498191-369-268437637544204/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:20 compute-0 sudo[83535]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:21 compute-0 sudo[83687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmejovyfkxjvyhklerhgpjbkkyjuckst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431560.9353106-385-105349368987579/AnsiballZ_file.py'
Oct 02 18:59:21 compute-0 sudo[83687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:21 compute-0 python3.9[83689]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:21 compute-0 sudo[83687]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:21 compute-0 sudo[83839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apnctafdnawimzpqcqgjruagypaaypdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431561.6402218-393-181814705882968/AnsiballZ_stat.py'
Oct 02 18:59:21 compute-0 sudo[83839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:22 compute-0 python3.9[83841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:22 compute-0 sudo[83839]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:22 compute-0 sudo[83962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvyvdgepukstqbpmlqpepnhujlijkaor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431561.6402218-393-181814705882968/AnsiballZ_copy.py'
Oct 02 18:59:22 compute-0 sudo[83962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:22 compute-0 python3.9[83964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431561.6402218-393-181814705882968/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:22 compute-0 sudo[83962]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:23 compute-0 sudo[84114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suhwzarozxqwxrzjmapgfrgctzfemebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431563.1028762-409-145450847167750/AnsiballZ_file.py'
Oct 02 18:59:23 compute-0 sudo[84114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:23 compute-0 python3.9[84116]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:23 compute-0 sudo[84114]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:24 compute-0 sudo[84266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtolzsyilgvsvobniutvoauzakhfmzyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431563.8709395-417-268462075348916/AnsiballZ_stat.py'
Oct 02 18:59:24 compute-0 sudo[84266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:24 compute-0 python3.9[84268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:24 compute-0 sudo[84266]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:24 compute-0 sudo[84389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzikpdupgkeauvpcdxhtvdqzwxeymbfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431563.8709395-417-268462075348916/AnsiballZ_copy.py'
Oct 02 18:59:24 compute-0 sudo[84389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:24 compute-0 python3.9[84391]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431563.8709395-417-268462075348916/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:25 compute-0 sudo[84389]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:25 compute-0 sudo[84541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzojmwcljpebewkglicvwaqnvldtyjjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431565.1974864-433-198214036879628/AnsiballZ_file.py'
Oct 02 18:59:25 compute-0 sudo[84541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:25 compute-0 python3.9[84543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:25 compute-0 sudo[84541]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:26 compute-0 sudo[84693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfjmrmxkjjjfkbnfqiqsjggsvvfyvso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431565.9672852-441-228101887838899/AnsiballZ_stat.py'
Oct 02 18:59:26 compute-0 sudo[84693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:26 compute-0 python3.9[84695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:26 compute-0 sudo[84693]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:26 compute-0 sudo[84816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jynsshxffrijjihuiustqtumfltywgyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431565.9672852-441-228101887838899/AnsiballZ_copy.py'
Oct 02 18:59:26 compute-0 sudo[84816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:27 compute-0 python3.9[84818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431565.9672852-441-228101887838899/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:27 compute-0 sudo[84816]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:27 compute-0 sudo[84968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vygdhyxisihyevdkhhxyfujypxavotro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431567.472344-457-72781336427798/AnsiballZ_file.py'
Oct 02 18:59:27 compute-0 sudo[84968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:28 compute-0 python3.9[84970]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:28 compute-0 sudo[84968]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:28 compute-0 sudo[85120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibqclqgdtwqbuzjiffogmciqqblrdgdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431568.3788643-465-186342544541282/AnsiballZ_stat.py'
Oct 02 18:59:28 compute-0 sudo[85120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:28 compute-0 python3.9[85122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:28 compute-0 sudo[85120]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:29 compute-0 sudo[85243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywguqcarylnrwojmhtfglkijdsozsps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431568.3788643-465-186342544541282/AnsiballZ_copy.py'
Oct 02 18:59:29 compute-0 sudo[85243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:29 compute-0 python3.9[85245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431568.3788643-465-186342544541282/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:29 compute-0 sudo[85243]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:30 compute-0 sudo[85395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbcesrvxydtdyzgntyvkozklvufxzipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431569.941932-481-8132643639250/AnsiballZ_file.py'
Oct 02 18:59:30 compute-0 sudo[85395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:30 compute-0 python3.9[85397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:30 compute-0 sudo[85395]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:31 compute-0 sudo[85547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexycbsuuffvpwiuuhhjjpxvjffhqgyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431570.8325446-489-60132505376619/AnsiballZ_stat.py'
Oct 02 18:59:31 compute-0 sudo[85547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:31 compute-0 python3.9[85549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:31 compute-0 sudo[85547]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:31 compute-0 sudo[85670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-annnbencwvsxlbtbzwbzboqmqngtpqwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431570.8325446-489-60132505376619/AnsiballZ_copy.py'
Oct 02 18:59:31 compute-0 sudo[85670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:32 compute-0 python3.9[85672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431570.8325446-489-60132505376619/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:32 compute-0 sudo[85670]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:32 compute-0 sudo[85822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungzzpktuknxuusjyuuovfvuwodwmqtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431572.4472115-505-65224245049558/AnsiballZ_file.py'
Oct 02 18:59:32 compute-0 sudo[85822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:33 compute-0 python3.9[85824]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:33 compute-0 sudo[85822]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:33 compute-0 PackageKit[31240]: daemon quit
Oct 02 18:59:33 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 18:59:33 compute-0 sudo[85974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzabqdvmmmjfshimwtwjypzyryvdgbpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431573.3419232-513-15191406671263/AnsiballZ_stat.py'
Oct 02 18:59:33 compute-0 sudo[85974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:33 compute-0 python3.9[85976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:33 compute-0 sudo[85974]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:34 compute-0 sudo[86097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlwljknuwhhpqshpqxrvqduowzstfxhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431573.3419232-513-15191406671263/AnsiballZ_copy.py'
Oct 02 18:59:34 compute-0 sudo[86097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:34 compute-0 python3.9[86099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431573.3419232-513-15191406671263/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:34 compute-0 sudo[86097]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:35 compute-0 sudo[86249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxzxmjvvaakhcpgwkalnahszxubhfrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431574.8765025-529-152621667060807/AnsiballZ_file.py'
Oct 02 18:59:35 compute-0 sudo[86249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:35 compute-0 python3.9[86251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:35 compute-0 sudo[86249]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:36 compute-0 sudo[86401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdticyijspywxqgrfsusnermyshoindo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431575.9358294-537-205699268978268/AnsiballZ_stat.py'
Oct 02 18:59:36 compute-0 sudo[86401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:36 compute-0 python3.9[86403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:36 compute-0 sudo[86401]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:36 compute-0 sudo[86524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmlechnygrubjlxalxozcdnblluerxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431575.9358294-537-205699268978268/AnsiballZ_copy.py'
Oct 02 18:59:36 compute-0 sudo[86524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:37 compute-0 python3.9[86526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431575.9358294-537-205699268978268/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=042424fcc498cb89df7270ccf3ebde10882bbe94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:37 compute-0 sudo[86524]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:37 compute-0 sshd-session[77311]: Connection closed by 192.168.122.30 port 46844
Oct 02 18:59:37 compute-0 sshd-session[77308]: pam_unix(sshd:session): session closed for user zuul
Oct 02 18:59:37 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 02 18:59:37 compute-0 systemd[1]: session-19.scope: Consumed 39.976s CPU time.
Oct 02 18:59:37 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Oct 02 18:59:37 compute-0 systemd-logind[798]: Removed session 19.
Oct 02 18:59:42 compute-0 sshd-session[86551]: Accepted publickey for zuul from 192.168.122.30 port 56634 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 18:59:42 compute-0 systemd-logind[798]: New session 20 of user zuul.
Oct 02 18:59:42 compute-0 systemd[1]: Started Session 20 of User zuul.
Oct 02 18:59:42 compute-0 sshd-session[86551]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 18:59:43 compute-0 python3.9[86704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:59:44 compute-0 sudo[86858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoovvzvqylrrhuojcuuzzlbcnsjuiqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431584.2927327-34-10444062465221/AnsiballZ_file.py'
Oct 02 18:59:44 compute-0 sudo[86858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:44 compute-0 python3.9[86860]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:45 compute-0 sudo[86858]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:45 compute-0 sudo[87010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvptowkogfvnsrbkbjrxonowhtkhihqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431585.1599195-34-1004704547846/AnsiballZ_file.py'
Oct 02 18:59:45 compute-0 sudo[87010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:45 compute-0 python3.9[87012]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 18:59:45 compute-0 sudo[87010]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:46 compute-0 python3.9[87162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 18:59:47 compute-0 sudo[87312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fosqfhesatkhoaqbhltlizwqkvqnytsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431586.6281533-57-58620500321720/AnsiballZ_seboolean.py'
Oct 02 18:59:47 compute-0 sudo[87312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:47 compute-0 python3.9[87314]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 18:59:48 compute-0 sudo[87312]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:49 compute-0 sudo[87470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkuhezkonghaxglawgsbgwwtqtgfnwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431588.7947276-67-70738426860891/AnsiballZ_setup.py'
Oct 02 18:59:49 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 02 18:59:49 compute-0 sudo[87470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:49 compute-0 python3.9[87472]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 18:59:49 compute-0 sudo[87470]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:50 compute-0 sudo[87554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arjimvxgmvnqmiujalhoxpcogmdxilxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431588.7947276-67-70738426860891/AnsiballZ_dnf.py'
Oct 02 18:59:50 compute-0 sudo[87554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:50 compute-0 python3.9[87556]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 18:59:51 compute-0 sudo[87554]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:52 compute-0 sudo[87707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkmaibkatbvikyvxtqladcfiechrowqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431591.7815719-79-152719573615012/AnsiballZ_systemd.py'
Oct 02 18:59:52 compute-0 sudo[87707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:52 compute-0 python3.9[87709]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 18:59:52 compute-0 sudo[87707]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:53 compute-0 sudo[87862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvpoxcdfqzwwxzgrkafchlxspmebnpyk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431592.82926-87-26314665703091/AnsiballZ_edpm_nftables_snippet.py'
Oct 02 18:59:53 compute-0 sudo[87862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:53 compute-0 python3[87864]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 02 18:59:53 compute-0 sudo[87862]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:54 compute-0 sudo[88014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeokkpdcausnzpgxsyiddlmxvrrvocfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431593.8137627-96-276173545525769/AnsiballZ_file.py'
Oct 02 18:59:54 compute-0 sudo[88014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:54 compute-0 python3.9[88016]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:54 compute-0 sudo[88014]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:55 compute-0 sudo[88166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhnytlrxgjeljpbocmxjfhtkqhveicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431594.5673506-104-141495065609484/AnsiballZ_stat.py'
Oct 02 18:59:55 compute-0 sudo[88166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:55 compute-0 python3.9[88168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:55 compute-0 sudo[88166]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:55 compute-0 sudo[88244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-worxfmpeammpodzbeyioypaxhcbyiadj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431594.5673506-104-141495065609484/AnsiballZ_file.py'
Oct 02 18:59:55 compute-0 sudo[88244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:55 compute-0 python3.9[88246]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:55 compute-0 sudo[88244]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:56 compute-0 sudo[88396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdmbcldxfmcllxmrzfruhtsxmrkclnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431595.9951303-116-174713231905998/AnsiballZ_stat.py'
Oct 02 18:59:56 compute-0 sudo[88396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:56 compute-0 python3.9[88398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:56 compute-0 sudo[88396]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:56 compute-0 sudo[88474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuuumwfwuzkrevqertnemgioekgtwobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431595.9951303-116-174713231905998/AnsiballZ_file.py'
Oct 02 18:59:56 compute-0 sudo[88474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:57 compute-0 python3.9[88476]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8pl_wwuk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:57 compute-0 sudo[88474]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:57 compute-0 sudo[88626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggxaoneeqtnhjaozktmdwjpfnppkftpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431597.2803977-128-271894170894424/AnsiballZ_stat.py'
Oct 02 18:59:57 compute-0 sudo[88626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:57 compute-0 python3.9[88628]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 18:59:57 compute-0 sudo[88626]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:58 compute-0 sudo[88704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwourimvestvjvctfygcyslfxpdqhpfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431597.2803977-128-271894170894424/AnsiballZ_file.py'
Oct 02 18:59:58 compute-0 sudo[88704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:58 compute-0 python3.9[88706]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 18:59:58 compute-0 sudo[88704]: pam_unix(sudo:session): session closed for user root
Oct 02 18:59:59 compute-0 sudo[88856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fugmoxkakjriaaxdfsvdxmbwfewsmiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431598.6478245-141-80317941904437/AnsiballZ_command.py'
Oct 02 18:59:59 compute-0 sudo[88856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 18:59:59 compute-0 python3.9[88858]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 18:59:59 compute-0 sudo[88856]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:00 compute-0 sudo[89009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odixepshuvwgllkkzjpmbqdzzfpeqbsu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431599.667503-149-212847698928342/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:00:00 compute-0 sudo[89009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:00 compute-0 python3[89011]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:00:00 compute-0 sudo[89009]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:00 compute-0 sudo[89161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tafclrmzdbjkezkgsdklruluwsejjohf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431600.572932-157-189770410832777/AnsiballZ_stat.py'
Oct 02 19:00:00 compute-0 sudo[89161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:01 compute-0 python3.9[89163]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:01 compute-0 sudo[89161]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:01 compute-0 sudo[89286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwmvwlyfpbspwvtpinvijwatlxpgwcqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431600.572932-157-189770410832777/AnsiballZ_copy.py'
Oct 02 19:00:01 compute-0 sudo[89286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:01 compute-0 python3.9[89288]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431600.572932-157-189770410832777/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:01 compute-0 sudo[89286]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:02 compute-0 sudo[89438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbprmgabylubnokkskspnxuyrblpmoat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431602.1820436-172-187849122067100/AnsiballZ_stat.py'
Oct 02 19:00:02 compute-0 sudo[89438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:02 compute-0 python3.9[89440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:02 compute-0 sudo[89438]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:03 compute-0 sudo[89563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnlpsopeavzswszjhkwwbltqyflnggif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431602.1820436-172-187849122067100/AnsiballZ_copy.py'
Oct 02 19:00:03 compute-0 sudo[89563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:03 compute-0 python3.9[89565]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431602.1820436-172-187849122067100/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:03 compute-0 sudo[89563]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:03 compute-0 sudo[89715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkrhdchwlwiofxfblyafvifwotsylrxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431603.5187545-187-220365670199663/AnsiballZ_stat.py'
Oct 02 19:00:03 compute-0 sudo[89715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:04 compute-0 python3.9[89717]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:04 compute-0 sudo[89715]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:04 compute-0 sudo[89840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnztdwdeoiywmtwolrgwbcohjxhynttm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431603.5187545-187-220365670199663/AnsiballZ_copy.py'
Oct 02 19:00:04 compute-0 sudo[89840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:04 compute-0 python3.9[89842]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431603.5187545-187-220365670199663/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:04 compute-0 sudo[89840]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:05 compute-0 sudo[89992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tijbonbctqtazizcxttswdbayfsdxest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431604.8977659-202-47918269675319/AnsiballZ_stat.py'
Oct 02 19:00:05 compute-0 sudo[89992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:05 compute-0 python3.9[89994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:05 compute-0 sudo[89992]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:05 compute-0 sudo[90117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxzxjmngblcmwnkaofwyvhvztyqkpaxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431604.8977659-202-47918269675319/AnsiballZ_copy.py'
Oct 02 19:00:05 compute-0 sudo[90117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:06 compute-0 python3.9[90119]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431604.8977659-202-47918269675319/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:06 compute-0 sudo[90117]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:06 compute-0 sudo[90269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkpfgkzogcbftganwsszwfvgfkyyuudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431606.338595-217-80176999376709/AnsiballZ_stat.py'
Oct 02 19:00:06 compute-0 sudo[90269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:06 compute-0 python3.9[90271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:07 compute-0 sudo[90269]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:07 compute-0 sudo[90394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csadotjzgxyovddpahmhcxfehiemrkah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431606.338595-217-80176999376709/AnsiballZ_copy.py'
Oct 02 19:00:07 compute-0 sudo[90394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:07 compute-0 python3.9[90396]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431606.338595-217-80176999376709/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:07 compute-0 sudo[90394]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:08 compute-0 sudo[90546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igmbqxxeswwyusidlcqjhlfkvvtxkjpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431607.7318432-232-29091325335713/AnsiballZ_file.py'
Oct 02 19:00:08 compute-0 sudo[90546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:08 compute-0 python3.9[90548]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:08 compute-0 sudo[90546]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:08 compute-0 sudo[90698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agfwmceieigyadtrqqadhbekebycrzaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431608.4374533-240-172818506588743/AnsiballZ_command.py'
Oct 02 19:00:08 compute-0 sudo[90698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:08 compute-0 python3.9[90700]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:09 compute-0 sudo[90698]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:09 compute-0 sudo[90853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqiojoojngjxxswspduuvvgljdutbeow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431609.172149-248-233396958924654/AnsiballZ_blockinfile.py'
Oct 02 19:00:09 compute-0 sudo[90853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:09 compute-0 python3.9[90855]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:09 compute-0 sudo[90853]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:10 compute-0 sudo[91005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyprmfgwohrpounfbznfrruvwxwsqwbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431610.1758327-257-105856627431968/AnsiballZ_command.py'
Oct 02 19:00:10 compute-0 sudo[91005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:10 compute-0 python3.9[91007]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:10 compute-0 sudo[91005]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:11 compute-0 sudo[91158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbcwnkkflujwnxjartlsdidsrywyldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431610.8866332-265-259537112435421/AnsiballZ_stat.py'
Oct 02 19:00:11 compute-0 sudo[91158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:11 compute-0 python3.9[91160]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:00:11 compute-0 sudo[91158]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:11 compute-0 sudo[91312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfojaiyjzmavbvqtlnidelmxzkeuutdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431611.6340644-273-3298845724053/AnsiballZ_command.py'
Oct 02 19:00:11 compute-0 sudo[91312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:12 compute-0 python3.9[91314]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:12 compute-0 sudo[91312]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:12 compute-0 sudo[91467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnygxylctzbkazpwwpacacvchohziznr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431612.3379169-281-243136652558773/AnsiballZ_file.py'
Oct 02 19:00:12 compute-0 sudo[91467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:12 compute-0 python3.9[91469]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:12 compute-0 sudo[91467]: pam_unix(sudo:session): session closed for user root
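The block above records the edpm firewall role's pattern: touch a `.changed` sentinel, dry-run the concatenated ruleset with `nft -c -f -`, persist the includes in `/etc/sysconfig/nftables.conf`, apply the chains file, then flush and apply the rules and remove the sentinel. A minimal Python sketch that rebuilds the two pipelines exactly as logged (file lists copied from the log; actually executing them is left to the caller, since it needs root and `nft`):

```python
# File lists as they appear in the journal entries above.
CHECK_FILES = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]
APPLY_FILES = [
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
]

def pipeline(files, check=False):
    """Return the shell pipeline: cat <files> | nft [-c] -f -"""
    flags = "-c -f -" if check else "-f -"
    return "set -o pipefail; cat {} | nft {}".format(" ".join(files), flags)

check_cmd = pipeline(CHECK_FILES, check=True)  # syntax-only dry run
apply_cmd = pipeline(APPLY_FILES)              # live apply of rules/jumps
print(check_cmd)
print(apply_cmd)
```

Note the check reads all five fragments (including chains and final jumps) while the apply step only replays flushes, rules, and jump updates; the chains file was already loaded on its own via `nft -f /etc/nftables/edpm-chains.nft`.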
Oct 02 19:00:14 compute-0 python3.9[91619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:00:14 compute-0 sudo[91770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tigtbofvvjfxrnbthvfdcyljrgrsccxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431614.6608148-321-276177086651229/AnsiballZ_command.py'
Oct 02 19:00:14 compute-0 sudo[91770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:15 compute-0 python3.9[91772]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:15 compute-0 ovs-vsctl[91773]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 02 19:00:15 compute-0 sudo[91770]: pam_unix(sudo:session): session closed for user root
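The `ovs-vsctl set open .` entry above configures the OVN chassis by writing `external_ids` keys into the local Open_vSwitch table. A sketch of how such a command line is assembled from a key/value map (a subset of the values from the log entry; this builds the argv only and does not run `ovs-vsctl`):

```python
# Subset of the external_ids recorded in the journal entry above.
external_ids = {
    "hostname": "compute-0.ctlplane.example.com",
    "ovn-bridge": "br-int",
    "ovn-bridge-mappings": "datacentre:br-ex",
    "ovn-encap-ip": "172.19.0.100",
    "ovn-encap-type": "geneve",
    "ovn-monitor-all": "True",
    "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
}

# ovs-vsctl accepts repeated column:key=value arguments after "set <table> <record>".
cmd = ["ovs-vsctl", "set", "open", "."] + [
    f"external_ids:{k}={v}" for k, v in external_ids.items()
]
print(" ".join(cmd))
```

Passing a list (rather than a shell string) to a process runner avoids quoting issues with values containing colons, which is why the Ansible task in the log only needs `_uses_shell=True` for the trailing newline handling.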
Oct 02 19:00:15 compute-0 sudo[91923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklefhfjktsmzbitauzjwfeghfbckthf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431615.3873723-330-76294872538464/AnsiballZ_command.py'
Oct 02 19:00:15 compute-0 sudo[91923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:15 compute-0 python3.9[91925]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:15 compute-0 sudo[91923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:16 compute-0 sudo[92078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwvpeizftnemvtsyvqwouiwclptycnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431616.1518593-338-231249926377690/AnsiballZ_command.py'
Oct 02 19:00:16 compute-0 sudo[92078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:16 compute-0 python3.9[92080]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:16 compute-0 ovs-vsctl[92081]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 02 19:00:16 compute-0 sudo[92078]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:17 compute-0 python3.9[92231]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:00:17 compute-0 sudo[92383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlyaygsalwabcseuebaaijptxbhimog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431617.5763445-355-200715623162272/AnsiballZ_file.py'
Oct 02 19:00:17 compute-0 sudo[92383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:18 compute-0 python3.9[92385]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:18 compute-0 sudo[92383]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:18 compute-0 sudo[92535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjlgxsahqurjyvusiyxsfvchpxtnhkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431618.2619925-363-148275456777949/AnsiballZ_stat.py'
Oct 02 19:00:18 compute-0 sudo[92535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:18 compute-0 python3.9[92537]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:18 compute-0 sudo[92535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:19 compute-0 sudo[92613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krkugmxxhmadwyeeggqgerwrujuslrag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431618.2619925-363-148275456777949/AnsiballZ_file.py'
Oct 02 19:00:19 compute-0 sudo[92613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:19 compute-0 python3.9[92615]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:19 compute-0 sudo[92613]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:19 compute-0 sudo[92765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyrngeqewcvwgtzatiqwhyaaaliuixus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431619.5210333-363-108032212030568/AnsiballZ_stat.py'
Oct 02 19:00:19 compute-0 sudo[92765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:20 compute-0 python3.9[92767]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:20 compute-0 sudo[92765]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:20 compute-0 sudo[92843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyoesiwfuebkyczzhiegugtgnzoqmpkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431619.5210333-363-108032212030568/AnsiballZ_file.py'
Oct 02 19:00:20 compute-0 sudo[92843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:20 compute-0 python3.9[92845]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:20 compute-0 sudo[92843]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:20 compute-0 sudo[92995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxcptqdjivajxcudqyjdocihjkncdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431620.7325082-386-180344237494611/AnsiballZ_file.py'
Oct 02 19:00:20 compute-0 sudo[92995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:21 compute-0 python3.9[92997]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:21 compute-0 sudo[92995]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:21 compute-0 sudo[93147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kweggeyltewfpljpghcbdkndcgywxrwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431621.3465533-394-258261333663172/AnsiballZ_stat.py'
Oct 02 19:00:21 compute-0 sudo[93147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:21 compute-0 python3.9[93149]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:21 compute-0 sudo[93147]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:22 compute-0 sudo[93225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskdtmudktvepbzvxsvyteuzkotirlxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431621.3465533-394-258261333663172/AnsiballZ_file.py'
Oct 02 19:00:22 compute-0 sudo[93225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:22 compute-0 python3.9[93227]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:22 compute-0 sudo[93225]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:22 compute-0 sudo[93377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwbjzyccoftsovvsfbomarbjhmejhike ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431622.5461118-406-246081062870645/AnsiballZ_stat.py'
Oct 02 19:00:22 compute-0 sudo[93377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:23 compute-0 python3.9[93379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:23 compute-0 sudo[93377]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:23 compute-0 sudo[93455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aumgmhqnqovbhxgxlfgcnzuqtcgmlpwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431622.5461118-406-246081062870645/AnsiballZ_file.py'
Oct 02 19:00:23 compute-0 sudo[93455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:23 compute-0 python3.9[93457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:23 compute-0 sudo[93455]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:24 compute-0 sudo[93607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfkgdscrjaxkjgkgsixbilucxnrpxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431623.730643-418-237730871852804/AnsiballZ_systemd.py'
Oct 02 19:00:24 compute-0 sudo[93607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:24 compute-0 python3.9[93609]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:00:24 compute-0 systemd[1]: Reloading.
Oct 02 19:00:24 compute-0 systemd-rc-local-generator[93638]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:24 compute-0 systemd-sysv-generator[93641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:24 compute-0 sudo[93607]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:25 compute-0 sudo[93797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhrhurzxfdxzmhgwsxmrjhoswnnnayxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431624.8951094-426-259242514235057/AnsiballZ_stat.py'
Oct 02 19:00:25 compute-0 sudo[93797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:25 compute-0 python3.9[93799]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:25 compute-0 sudo[93797]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:25 compute-0 sudo[93875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejtjprmxgvplugskmsjcpnqklafcicob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431624.8951094-426-259242514235057/AnsiballZ_file.py'
Oct 02 19:00:25 compute-0 sudo[93875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:25 compute-0 python3.9[93877]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:25 compute-0 sudo[93875]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:26 compute-0 sudo[94027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqzlwhcipuegomkjudigjygnbqhiwscn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431626.1506622-438-49231949038890/AnsiballZ_stat.py'
Oct 02 19:00:26 compute-0 sudo[94027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:26 compute-0 python3.9[94029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:26 compute-0 sudo[94027]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:26 compute-0 sudo[94105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmggliiozmkswdnlmhsigzozskuqihto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431626.1506622-438-49231949038890/AnsiballZ_file.py'
Oct 02 19:00:26 compute-0 sudo[94105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:27 compute-0 python3.9[94107]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:27 compute-0 sudo[94105]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:27 compute-0 sudo[94257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utvgermxkzqvxforocakbpqfslrqpjjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431627.3721964-450-254273028725693/AnsiballZ_systemd.py'
Oct 02 19:00:27 compute-0 sudo[94257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:27 compute-0 python3.9[94259]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:00:28 compute-0 systemd[1]: Reloading.
Oct 02 19:00:28 compute-0 systemd-sysv-generator[94285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:28 compute-0 systemd-rc-local-generator[94281]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:28 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:00:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:00:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:00:28 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:00:28 compute-0 sudo[94257]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:28 compute-0 sudo[94450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otiqtxooljtogqtkokhlfvrxwdxzrudx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431628.5766356-460-128352704354938/AnsiballZ_file.py'
Oct 02 19:00:28 compute-0 sudo[94450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:29 compute-0 python3.9[94452]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:29 compute-0 sudo[94450]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:29 compute-0 sudo[94602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fylaquaqphkcxfyhxckucrrtmjcxffou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431629.2621908-468-219102645844720/AnsiballZ_stat.py'
Oct 02 19:00:29 compute-0 sudo[94602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:29 compute-0 python3.9[94604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:29 compute-0 sudo[94602]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:30 compute-0 sudo[94725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqteddisokbukpackkmarbtwjcnjvpzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431629.2621908-468-219102645844720/AnsiballZ_copy.py'
Oct 02 19:00:30 compute-0 sudo[94725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:30 compute-0 python3.9[94727]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431629.2621908-468-219102645844720/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:30 compute-0 sudo[94725]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:31 compute-0 sudo[94877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-conzelhsegtwfojvavzfqpdrrzjheylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431630.8195298-485-10231175467242/AnsiballZ_file.py'
Oct 02 19:00:31 compute-0 sudo[94877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:31 compute-0 python3.9[94879]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:31 compute-0 sudo[94877]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:31 compute-0 sudo[95029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whmzkenxruimkeqxlojxnjqvwcbbvsjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431631.529988-493-46913511952240/AnsiballZ_stat.py'
Oct 02 19:00:31 compute-0 sudo[95029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:32 compute-0 python3.9[95031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:00:32 compute-0 sudo[95029]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:32 compute-0 sudo[95152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnqulslvrihjhcvagqntzlqsaqicexe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431631.529988-493-46913511952240/AnsiballZ_copy.py'
Oct 02 19:00:32 compute-0 sudo[95152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:32 compute-0 python3.9[95154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431631.529988-493-46913511952240/.source.json _original_basename=.5wzzeqb0 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:32 compute-0 sudo[95152]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:33 compute-0 sudo[95304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqltallbpqquqvygtabaatxqrfjrycho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431632.7867503-508-158972945040008/AnsiballZ_file.py'
Oct 02 19:00:33 compute-0 sudo[95304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:33 compute-0 python3.9[95306]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:33 compute-0 sudo[95304]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:33 compute-0 sudo[95456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlxvbobwxslsbjeqztccmaumwvwiqfuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431633.5912306-516-55039908706788/AnsiballZ_stat.py'
Oct 02 19:00:33 compute-0 sudo[95456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:34 compute-0 sudo[95456]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:34 compute-0 sudo[95579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqrfeletzyveejzktzqwiuehqcxyfmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431633.5912306-516-55039908706788/AnsiballZ_copy.py'
Oct 02 19:00:34 compute-0 sudo[95579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:34 compute-0 sudo[95579]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:35 compute-0 sudo[95731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oicmdlbeyphlguspbidsljvxqxkphbos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431635.2049232-533-273104688228129/AnsiballZ_container_config_data.py'
Oct 02 19:00:35 compute-0 sudo[95731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:36 compute-0 python3.9[95733]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 02 19:00:36 compute-0 sudo[95731]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:36 compute-0 sudo[95883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazvdhhlknhodzvvusrqwsqkzaprxawi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431636.2722852-542-184505670826543/AnsiballZ_container_config_hash.py'
Oct 02 19:00:36 compute-0 sudo[95883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:37 compute-0 python3.9[95885]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:00:37 compute-0 sudo[95883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:37 compute-0 sudo[96035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhmraqwhcxladkhdkyprreohvxavuzpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431637.3335137-551-230668970831196/AnsiballZ_podman_container_info.py'
Oct 02 19:00:37 compute-0 sudo[96035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:38 compute-0 python3.9[96037]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 19:00:38 compute-0 sudo[96035]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:39 compute-0 sudo[96198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvtljejujezitrrginkwiyawlohizcg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431638.5890121-564-72051800368309/AnsiballZ_edpm_container_manage.py'
Oct 02 19:00:39 compute-0 sudo[96198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:39 compute-0 python3[96200]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 19:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 19:00:39 compute-0 podman[96237]: 2025-10-02 19:00:39.638237631 +0000 UTC m=+0.060390813 container create d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 02 19:00:39 compute-0 podman[96237]: 2025-10-02 19:00:39.607459304 +0000 UTC m=+0.029612486 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 19:00:39 compute-0 python3[96200]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 19:00:39 compute-0 sudo[96198]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:40 compute-0 sudo[96424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnpdtbgqyewvmwqnxkhdubqbwaavexak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431640.025053-572-170069243163113/AnsiballZ_stat.py'
Oct 02 19:00:40 compute-0 sudo[96424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 19:00:40 compute-0 python3.9[96426]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:00:40 compute-0 sudo[96424]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:41 compute-0 sudo[96578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ognhmlyaattayxknfgkfvuzhngqtgown ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431640.9048705-581-148830874104740/AnsiballZ_file.py'
Oct 02 19:00:41 compute-0 sudo[96578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:41 compute-0 python3.9[96580]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:41 compute-0 sudo[96578]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:41 compute-0 sudo[96654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcsvqhscfbchmrxcxpfupwbggjvvgxqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431640.9048705-581-148830874104740/AnsiballZ_stat.py'
Oct 02 19:00:41 compute-0 sudo[96654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:41 compute-0 python3.9[96656]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:00:41 compute-0 sudo[96654]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:42 compute-0 sudo[96805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpqrppqjtgachkatsnuebohfmllozyhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431642.0174382-581-265478623373533/AnsiballZ_copy.py'
Oct 02 19:00:42 compute-0 sudo[96805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:42 compute-0 python3.9[96807]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431642.0174382-581-265478623373533/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:00:42 compute-0 sudo[96805]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:43 compute-0 sudo[96881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jilqlakdqtraitpxkkuissofwhipduwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431642.0174382-581-265478623373533/AnsiballZ_systemd.py'
Oct 02 19:00:43 compute-0 sudo[96881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:43 compute-0 python3.9[96883]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:00:43 compute-0 systemd[1]: Reloading.
Oct 02 19:00:43 compute-0 systemd-rc-local-generator[96911]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:43 compute-0 systemd-sysv-generator[96914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:43 compute-0 sudo[96881]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:44 compute-0 sudo[96993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmjemwfuckvhkzhtxuipqgesiixeluw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431642.0174382-581-265478623373533/AnsiballZ_systemd.py'
Oct 02 19:00:44 compute-0 sudo[96993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:44 compute-0 python3.9[96995]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:00:44 compute-0 systemd[1]: Reloading.
Oct 02 19:00:44 compute-0 systemd-sysv-generator[97029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:44 compute-0 systemd-rc-local-generator[97025]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:44 compute-0 systemd[1]: Starting ovn_controller container...
Oct 02 19:00:44 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 02 19:00:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fe4f018debc7dc6a5ce07dbcb9546c910201d02eb81d4b2418e178a68da62b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:00:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.
Oct 02 19:00:44 compute-0 podman[97036]: 2025-10-02 19:00:44.974419439 +0000 UTC m=+0.166998076 container init d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:00:44 compute-0 ovn_controller[97052]: + sudo -E kolla_set_configs
Oct 02 19:00:45 compute-0 podman[97036]: 2025-10-02 19:00:45.027669509 +0000 UTC m=+0.220248116 container start d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 19:00:45 compute-0 edpm-start-podman-container[97036]: ovn_controller
Oct 02 19:00:45 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 19:00:45 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 19:00:45 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 19:00:45 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 19:00:45 compute-0 systemd[97085]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 19:00:45 compute-0 edpm-start-podman-container[97035]: Creating additional drop-in dependency for "ovn_controller" (d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2)
Oct 02 19:00:45 compute-0 podman[97059]: 2025-10-02 19:00:45.12859153 +0000 UTC m=+0.086111674 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:00:45 compute-0 systemd[1]: d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2-1b0feb68ec843bd8.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:00:45 compute-0 systemd[1]: d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2-1b0feb68ec843bd8.service: Failed with result 'exit-code'.
Oct 02 19:00:45 compute-0 systemd[1]: Reloading.
Oct 02 19:00:45 compute-0 systemd-rc-local-generator[97130]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:00:45 compute-0 systemd-sysv-generator[97138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:00:45 compute-0 systemd[97085]: Queued start job for default target Main User Target.
Oct 02 19:00:45 compute-0 systemd[97085]: Created slice User Application Slice.
Oct 02 19:00:45 compute-0 systemd[97085]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 19:00:45 compute-0 systemd[97085]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 19:00:45 compute-0 systemd[97085]: Reached target Paths.
Oct 02 19:00:45 compute-0 systemd[97085]: Reached target Timers.
Oct 02 19:00:45 compute-0 systemd[97085]: Starting D-Bus User Message Bus Socket...
Oct 02 19:00:45 compute-0 systemd[97085]: Starting Create User's Volatile Files and Directories...
Oct 02 19:00:45 compute-0 systemd[97085]: Listening on D-Bus User Message Bus Socket.
Oct 02 19:00:45 compute-0 systemd[97085]: Reached target Sockets.
Oct 02 19:00:45 compute-0 systemd[97085]: Finished Create User's Volatile Files and Directories.
Oct 02 19:00:45 compute-0 systemd[97085]: Reached target Basic System.
Oct 02 19:00:45 compute-0 systemd[97085]: Reached target Main User Target.
Oct 02 19:00:45 compute-0 systemd[97085]: Startup finished in 145ms.
Oct 02 19:00:45 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 19:00:45 compute-0 systemd[1]: Started ovn_controller container.
Oct 02 19:00:45 compute-0 systemd[1]: Started Session c1 of User root.
Oct 02 19:00:45 compute-0 sudo[96993]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:45 compute-0 ovn_controller[97052]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:00:45 compute-0 ovn_controller[97052]: INFO:__main__:Validating config file
Oct 02 19:00:45 compute-0 ovn_controller[97052]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:00:45 compute-0 ovn_controller[97052]: INFO:__main__:Writing out command to execute
Oct 02 19:00:45 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: ++ cat /run_command
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + ARGS=
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + sudo kolla_copy_cacerts
Oct 02 19:00:45 compute-0 systemd[1]: Started Session c2 of User root.
Oct 02 19:00:45 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + [[ ! -n '' ]]
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + . kolla_extend_start
Oct 02 19:00:45 compute-0 ovn_controller[97052]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + umask 0022
Oct 02 19:00:45 compute-0 ovn_controller[97052]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.5377] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.5384] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.5394] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.5399] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.5402] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 19:00:45 compute-0 kernel: br-int: entered promiscuous mode
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 02 19:00:45 compute-0 systemd-udevd[97185]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 19:00:45 compute-0 ovn_controller[97052]: 2025-10-02T19:00:45Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.5838] manager: (ovn-35904f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 02 19:00:45 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.6024] device (genev_sys_6081): carrier: link connected
Oct 02 19:00:45 compute-0 NetworkManager[52324]: <info>  [1759431645.6027] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Oct 02 19:00:45 compute-0 sudo[97316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnasrryhueogklzvpikpggdvnommqsob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431645.6477115-609-19957730778398/AnsiballZ_command.py'
Oct 02 19:00:45 compute-0 sudo[97316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:46 compute-0 python3.9[97318]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:46 compute-0 ovs-vsctl[97319]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 02 19:00:46 compute-0 sudo[97316]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:46 compute-0 sudo[97469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkruzqazmccvjbokvhsmxpgoutlpaoik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431646.4411-617-31782341636650/AnsiballZ_command.py'
Oct 02 19:00:46 compute-0 sudo[97469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:46 compute-0 python3.9[97471]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:46 compute-0 ovs-vsctl[97473]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 02 19:00:47 compute-0 sudo[97469]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:47 compute-0 sudo[97624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqduguqlakbdslvkhdxydubyzwnegbyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431647.3356555-631-6587821330539/AnsiballZ_command.py'
Oct 02 19:00:47 compute-0 sudo[97624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:47 compute-0 python3.9[97626]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:00:47 compute-0 ovs-vsctl[97627]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 02 19:00:47 compute-0 sudo[97624]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:48 compute-0 sshd-session[86554]: Connection closed by 192.168.122.30 port 56634
Oct 02 19:00:48 compute-0 sshd-session[86551]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:00:48 compute-0 systemd-logind[798]: Session 20 logged out. Waiting for processes to exit.
Oct 02 19:00:48 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct 02 19:00:48 compute-0 systemd[1]: session-20.scope: Consumed 49.631s CPU time.
Oct 02 19:00:48 compute-0 systemd-logind[798]: Removed session 20.
Oct 02 19:00:54 compute-0 sshd-session[97652]: Accepted publickey for zuul from 192.168.122.30 port 43642 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:00:54 compute-0 systemd-logind[798]: New session 22 of user zuul.
Oct 02 19:00:54 compute-0 systemd[1]: Started Session 22 of User zuul.
Oct 02 19:00:54 compute-0 sshd-session[97652]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:00:55 compute-0 python3.9[97805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:00:55 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 19:00:55 compute-0 systemd[97085]: Activating special unit Exit the Session...
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped target Main User Target.
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped target Basic System.
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped target Paths.
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped target Sockets.
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped target Timers.
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 19:00:55 compute-0 systemd[97085]: Closed D-Bus User Message Bus Socket.
Oct 02 19:00:55 compute-0 systemd[97085]: Stopped Create User's Volatile Files and Directories.
Oct 02 19:00:55 compute-0 systemd[97085]: Removed slice User Application Slice.
Oct 02 19:00:55 compute-0 systemd[97085]: Reached target Shutdown.
Oct 02 19:00:55 compute-0 systemd[97085]: Finished Exit the Session.
Oct 02 19:00:55 compute-0 systemd[97085]: Reached target Exit the Session.
Oct 02 19:00:55 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 19:00:55 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 19:00:55 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 19:00:55 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 19:00:55 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 19:00:55 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 19:00:55 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 19:00:56 compute-0 sudo[97962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhbfvdxxkqzvrjfmxkjmqkrkgrmfdpct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431655.705478-34-101095433612358/AnsiballZ_file.py'
Oct 02 19:00:56 compute-0 sudo[97962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:56 compute-0 python3.9[97964]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:56 compute-0 sudo[97962]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:56 compute-0 sudo[98114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foxfaqzcioqaumdthkrkbqwmpgtfhqel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431656.5519428-34-196324742267020/AnsiballZ_file.py'
Oct 02 19:00:56 compute-0 sudo[98114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:57 compute-0 python3.9[98116]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:57 compute-0 sudo[98114]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:57 compute-0 sudo[98266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mucaygojbxiiqngexpgijszmwocbzokk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431657.2111542-34-212630722853754/AnsiballZ_file.py'
Oct 02 19:00:57 compute-0 sudo[98266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:57 compute-0 python3.9[98268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:57 compute-0 sudo[98266]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:58 compute-0 sudo[98418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyopgshqtutlsvleildwwmdvfwohmwyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431657.8875532-34-52708293391674/AnsiballZ_file.py'
Oct 02 19:00:58 compute-0 sudo[98418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:58 compute-0 python3.9[98420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:58 compute-0 sudo[98418]: pam_unix(sudo:session): session closed for user root
Oct 02 19:00:59 compute-0 sudo[98570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbvxopxbogmsllkwbkppsnehgysloeyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431658.6077864-34-161180325712084/AnsiballZ_file.py'
Oct 02 19:00:59 compute-0 sudo[98570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:00:59 compute-0 python3.9[98572]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:00:59 compute-0 sudo[98570]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:00 compute-0 python3.9[98722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:01:00 compute-0 sudo[98872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxpojfzyjgngkfdhiaqnowobwjnbpzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431660.352502-78-99495526224494/AnsiballZ_seboolean.py'
Oct 02 19:01:00 compute-0 sudo[98872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:01 compute-0 python3.9[98874]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 19:01:01 compute-0 CROND[98876]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 19:01:01 compute-0 run-parts[98879]: (/etc/cron.hourly) starting 0anacron
Oct 02 19:01:01 compute-0 anacron[98887]: Anacron started on 2025-10-02
Oct 02 19:01:01 compute-0 anacron[98887]: Will run job `cron.daily' in 44 min.
Oct 02 19:01:01 compute-0 anacron[98887]: Will run job `cron.weekly' in 64 min.
Oct 02 19:01:01 compute-0 anacron[98887]: Will run job `cron.monthly' in 84 min.
Oct 02 19:01:01 compute-0 anacron[98887]: Jobs will be executed sequentially
Oct 02 19:01:01 compute-0 run-parts[98889]: (/etc/cron.hourly) finished 0anacron
Oct 02 19:01:01 compute-0 CROND[98875]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 19:01:01 compute-0 sudo[98872]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:02 compute-0 python3.9[99039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:03 compute-0 python3.9[99160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431662.010176-86-255806719229884/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:04 compute-0 python3.9[99311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:04 compute-0 python3.9[99432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431663.8049634-101-277851543792741/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:05 compute-0 sudo[99582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noxenwmcnptzgmwthxliknxgdxuuhjip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431665.2817774-118-239156212530636/AnsiballZ_setup.py'
Oct 02 19:01:05 compute-0 sudo[99582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:05 compute-0 python3.9[99584]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:01:06 compute-0 sudo[99582]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:06 compute-0 sudo[99666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mudhmwxvhdfgtfpitabfggeypeaypqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431665.2817774-118-239156212530636/AnsiballZ_dnf.py'
Oct 02 19:01:06 compute-0 sudo[99666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:06 compute-0 python3.9[99668]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:01:08 compute-0 sudo[99666]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:08 compute-0 sudo[99819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swaqolrkxswehbmvgmhjpszewiemchaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431668.3088908-130-232435449525635/AnsiballZ_systemd.py'
Oct 02 19:01:09 compute-0 sudo[99819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:09 compute-0 python3.9[99821]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:01:09 compute-0 sudo[99819]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:10 compute-0 python3.9[99974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:10 compute-0 python3.9[100095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431669.6241724-138-107693785999547/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:11 compute-0 python3.9[100245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:12 compute-0 python3.9[100366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431671.035308-138-11360452550624/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:13 compute-0 python3.9[100516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:14 compute-0 python3.9[100637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431673.0168817-182-66316686435801/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:14 compute-0 python3.9[100787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:15 compute-0 ovn_controller[97052]: 2025-10-02T19:01:15Z|00025|memory|INFO|16128 kB peak resident set size after 29.9 seconds
Oct 02 19:01:15 compute-0 ovn_controller[97052]: 2025-10-02T19:01:15Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 02 19:01:15 compute-0 podman[100882]: 2025-10-02 19:01:15.434963376 +0000 UTC m=+0.152496398 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:01:15 compute-0 python3.9[100918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431674.2947052-182-97755979478304/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:16 compute-0 python3.9[101084]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:01:16 compute-0 sudo[101236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ducfrpjhiggtsjapsjwckssmgtgqgeeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431676.5412583-220-277494550226708/AnsiballZ_file.py'
Oct 02 19:01:16 compute-0 sudo[101236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:17 compute-0 python3.9[101238]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:17 compute-0 sudo[101236]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:17 compute-0 sudo[101388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmbdravemdieqggvbaatpztqawtwictd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431677.280669-228-256603246589635/AnsiballZ_stat.py'
Oct 02 19:01:17 compute-0 sudo[101388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:17 compute-0 python3.9[101390]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:17 compute-0 sudo[101388]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:18 compute-0 sudo[101466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhscctriertdtraagbzlldlsvpdhpbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431677.280669-228-256603246589635/AnsiballZ_file.py'
Oct 02 19:01:18 compute-0 sudo[101466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:18 compute-0 python3.9[101468]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:18 compute-0 sudo[101466]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:18 compute-0 sudo[101618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfyfnqllbbauydgbbgbgmptefoaygbbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431678.474697-228-113183191421805/AnsiballZ_stat.py'
Oct 02 19:01:18 compute-0 sudo[101618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:18 compute-0 python3.9[101620]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:19 compute-0 sudo[101618]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:19 compute-0 sudo[101696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kowuynlozuvgmwoqtfkxnuhwubvutkyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431678.474697-228-113183191421805/AnsiballZ_file.py'
Oct 02 19:01:19 compute-0 sudo[101696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:19 compute-0 python3.9[101698]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:19 compute-0 sudo[101696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:20 compute-0 sudo[101848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdscndxvwflcnritniakymvfentndhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431679.935392-251-3647476674593/AnsiballZ_file.py'
Oct 02 19:01:20 compute-0 sudo[101848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:20 compute-0 python3.9[101850]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:20 compute-0 sudo[101848]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:20 compute-0 sudo[102000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqcucldmubdzzjineazcjxiufzimznaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431680.6332448-259-230439122679685/AnsiballZ_stat.py'
Oct 02 19:01:20 compute-0 sudo[102000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:21 compute-0 python3.9[102002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:21 compute-0 sudo[102000]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:21 compute-0 sudo[102078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlkcobcbfuvfueqytbvdunrionuxtwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431680.6332448-259-230439122679685/AnsiballZ_file.py'
Oct 02 19:01:21 compute-0 sudo[102078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:21 compute-0 python3.9[102080]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:21 compute-0 sudo[102078]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:22 compute-0 sudo[102230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxcqssodcgfegpzpeujwevjdrrrxewvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431681.7469447-271-276687347081052/AnsiballZ_stat.py'
Oct 02 19:01:22 compute-0 sudo[102230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:22 compute-0 python3.9[102232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:22 compute-0 sudo[102230]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:22 compute-0 sudo[102308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmhrwimqtrvbplwqiwfgtkvslknwzkpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431681.7469447-271-276687347081052/AnsiballZ_file.py'
Oct 02 19:01:22 compute-0 sudo[102308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:22 compute-0 python3.9[102310]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:22 compute-0 sudo[102308]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:23 compute-0 sudo[102460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drprqiambtdcuzqixgzhyibsdeybodti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431682.8921216-283-168216145102206/AnsiballZ_systemd.py'
Oct 02 19:01:23 compute-0 sudo[102460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:23 compute-0 python3.9[102462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:01:23 compute-0 systemd[1]: Reloading.
Oct 02 19:01:23 compute-0 systemd-rc-local-generator[102486]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:23 compute-0 systemd-sysv-generator[102490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:23 compute-0 sudo[102460]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:24 compute-0 sudo[102649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwdbkodzuofbxduvwnzomqaxqnfzkdkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431684.0387392-291-239894137842708/AnsiballZ_stat.py'
Oct 02 19:01:24 compute-0 sudo[102649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:24 compute-0 python3.9[102651]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:24 compute-0 sudo[102649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:24 compute-0 sudo[102727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxyudctogbdsldbvclogdhwztgxlorms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431684.0387392-291-239894137842708/AnsiballZ_file.py'
Oct 02 19:01:24 compute-0 sudo[102727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:25 compute-0 python3.9[102729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:25 compute-0 sudo[102727]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:25 compute-0 sudo[102879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhlglzlcczvzxrvpvczkhwdpijhjcmfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431685.3541431-303-30480603772722/AnsiballZ_stat.py'
Oct 02 19:01:25 compute-0 sudo[102879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:25 compute-0 python3.9[102881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:25 compute-0 sudo[102879]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:26 compute-0 sudo[102957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgyyyszexstqjhzhmyolhjdidqdlcfpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431685.3541431-303-30480603772722/AnsiballZ_file.py'
Oct 02 19:01:26 compute-0 sudo[102957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:26 compute-0 python3.9[102959]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:26 compute-0 sudo[102957]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:27 compute-0 sudo[103109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydbbhvshlvwrwjucnddapwmnkfieenf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431686.6997266-315-257062476693014/AnsiballZ_systemd.py'
Oct 02 19:01:27 compute-0 sudo[103109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:27 compute-0 python3.9[103111]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:01:27 compute-0 systemd[1]: Reloading.
Oct 02 19:01:27 compute-0 systemd-rc-local-generator[103138]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:27 compute-0 systemd-sysv-generator[103142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:27 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:01:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:01:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:01:27 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:01:27 compute-0 sudo[103109]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:28 compute-0 sudo[103303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khcabwrggoggpbkyndptbluojbkvpaff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431687.8886023-325-246413026066434/AnsiballZ_file.py'
Oct 02 19:01:28 compute-0 sudo[103303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:28 compute-0 python3.9[103305]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:28 compute-0 sudo[103303]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:28 compute-0 sudo[103455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddyjgysawacvpdazgvgztnzvsuyzbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431688.5752459-333-90036003814704/AnsiballZ_stat.py'
Oct 02 19:01:28 compute-0 sudo[103455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:29 compute-0 python3.9[103457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:29 compute-0 sudo[103455]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:29 compute-0 sudo[103578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzblqrjwnreaqtujrwkiqcbfpjzgrbtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431688.5752459-333-90036003814704/AnsiballZ_copy.py'
Oct 02 19:01:29 compute-0 sudo[103578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:29 compute-0 python3.9[103580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759431688.5752459-333-90036003814704/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:29 compute-0 sudo[103578]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:30 compute-0 sudo[103730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmnjprzxxsplukuckwuueaezqfnymud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431690.0779517-350-274830580753452/AnsiballZ_file.py'
Oct 02 19:01:30 compute-0 sudo[103730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:30 compute-0 python3.9[103732]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:01:30 compute-0 sudo[103730]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:31 compute-0 sudo[103882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wucuoorclbereuosczxeanfebkmtvodm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431690.8765972-358-111527443752530/AnsiballZ_stat.py'
Oct 02 19:01:31 compute-0 sudo[103882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:31 compute-0 python3.9[103884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:01:31 compute-0 sudo[103882]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:31 compute-0 sudo[104005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgktxojhaxiispzsiymojxkywgczqquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431690.8765972-358-111527443752530/AnsiballZ_copy.py'
Oct 02 19:01:31 compute-0 sudo[104005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:32 compute-0 python3.9[104007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431690.8765972-358-111527443752530/.source.json _original_basename=.a0m6otca follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:32 compute-0 sudo[104005]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:32 compute-0 sudo[104157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdyibxyjwsswbjabwsvhdslkkjniyto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431692.221798-373-222120883465283/AnsiballZ_file.py'
Oct 02 19:01:32 compute-0 sudo[104157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:32 compute-0 python3.9[104159]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:32 compute-0 sudo[104157]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:33 compute-0 sudo[104309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhrjjvmowxkszgslyvrrwdrzzpmgnvrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431693.0675538-381-22050703063994/AnsiballZ_stat.py'
Oct 02 19:01:33 compute-0 sudo[104309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:33 compute-0 sudo[104309]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:33 compute-0 sudo[104432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfgowfjomiedlqolcskrbszjvxbyznpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431693.0675538-381-22050703063994/AnsiballZ_copy.py'
Oct 02 19:01:33 compute-0 sudo[104432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:34 compute-0 sudo[104432]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:35 compute-0 sudo[104584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlpegmwfyikjcejpelcmxypqdujvdzjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431694.537869-398-165857413160465/AnsiballZ_container_config_data.py'
Oct 02 19:01:35 compute-0 sudo[104584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:35 compute-0 python3.9[104586]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 02 19:01:35 compute-0 sudo[104584]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:36 compute-0 sudo[104736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glcisjosqcadotkrmdthiysxwgcuyikf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431695.4960854-407-240456054541622/AnsiballZ_container_config_hash.py'
Oct 02 19:01:36 compute-0 sudo[104736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:36 compute-0 python3.9[104738]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:01:36 compute-0 sudo[104736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:37 compute-0 sudo[104888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqzfdkeorrvcyzauifesufepmmmlsdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431696.5698292-416-218250541912352/AnsiballZ_podman_container_info.py'
Oct 02 19:01:37 compute-0 sudo[104888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:37 compute-0 python3.9[104890]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:01:37 compute-0 sudo[104888]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:38 compute-0 sudo[105066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmscuhaskcbybsoyakpyddvaxbjboofc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431698.0453036-429-133236492366363/AnsiballZ_edpm_container_manage.py'
Oct 02 19:01:38 compute-0 sudo[105066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:38 compute-0 python3[105068]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:01:39 compute-0 podman[105104]: 2025-10-02 19:01:39.159682676 +0000 UTC m=+0.080212617 container create 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 02 19:01:39 compute-0 podman[105104]: 2025-10-02 19:01:39.123934725 +0000 UTC m=+0.044464746 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:01:39 compute-0 python3[105068]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:01:39 compute-0 sudo[105066]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:39 compute-0 sudo[105291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhapzgvhqpjwqibxxgnocjimryvjfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431699.574355-437-197841416149173/AnsiballZ_stat.py'
Oct 02 19:01:39 compute-0 sudo[105291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:40 compute-0 python3.9[105293]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:01:40 compute-0 sudo[105291]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:40 compute-0 sudo[105445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfziopspvqtqpgzufxtplmactyccqjkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431700.534619-446-97612405496983/AnsiballZ_file.py'
Oct 02 19:01:40 compute-0 sudo[105445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:41 compute-0 python3.9[105447]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:41 compute-0 sudo[105445]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:41 compute-0 sudo[105521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmdtqqvclabczwzdvsvmiswkaxpanwhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431700.534619-446-97612405496983/AnsiballZ_stat.py'
Oct 02 19:01:41 compute-0 sudo[105521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:41 compute-0 python3.9[105523]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:01:41 compute-0 sudo[105521]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:42 compute-0 sudo[105672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbervtozbfvdwwubdlvzfrpxmgbfreut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431701.7150505-446-186717529064837/AnsiballZ_copy.py'
Oct 02 19:01:42 compute-0 sudo[105672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:42 compute-0 python3.9[105674]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759431701.7150505-446-186717529064837/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:01:42 compute-0 sudo[105672]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:42 compute-0 sudo[105748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lotqxdfjoyugxzslhmvyablniroadstr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431701.7150505-446-186717529064837/AnsiballZ_systemd.py'
Oct 02 19:01:42 compute-0 sudo[105748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:43 compute-0 python3.9[105750]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:01:43 compute-0 systemd[1]: Reloading.
Oct 02 19:01:43 compute-0 systemd-rc-local-generator[105780]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:43 compute-0 systemd-sysv-generator[105784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:43 compute-0 sudo[105748]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:43 compute-0 sudo[105860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghqsoivwyphkjmsxvywzxonitfbbvoug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431701.7150505-446-186717529064837/AnsiballZ_systemd.py'
Oct 02 19:01:43 compute-0 sudo[105860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:43 compute-0 python3.9[105862]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:01:45 compute-0 systemd[1]: Reloading.
Oct 02 19:01:45 compute-0 systemd-rc-local-generator[105892]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:45 compute-0 systemd-sysv-generator[105898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:45 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 02 19:01:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:01:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cffc744119955e9782fe5f81d3b42d354b495328a7454d3560b4179c03764c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 02 19:01:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cffc744119955e9782fe5f81d3b42d354b495328a7454d3560b4179c03764c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:01:45 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.
Oct 02 19:01:45 compute-0 podman[105903]: 2025-10-02 19:01:45.566066767 +0000 UTC m=+0.176812094 container init 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + sudo -E kolla_set_configs
Oct 02 19:01:45 compute-0 podman[105903]: 2025-10-02 19:01:45.591791049 +0000 UTC m=+0.202536356 container start 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 19:01:45 compute-0 edpm-start-podman-container[105903]: ovn_metadata_agent
Oct 02 19:01:45 compute-0 podman[105915]: 2025-10-02 19:01:45.608317675 +0000 UTC m=+0.119700714 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Validating config file
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Copying service configuration files
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Writing out command to execute
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: ++ cat /run_command
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + CMD=neutron-ovn-metadata-agent
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + ARGS=
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + sudo kolla_copy_cacerts
Oct 02 19:01:45 compute-0 edpm-start-podman-container[105902]: Creating additional drop-in dependency for "ovn_metadata_agent" (40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d)
Oct 02 19:01:45 compute-0 podman[105946]: 2025-10-02 19:01:45.672262621 +0000 UTC m=+0.067117159 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + [[ ! -n '' ]]
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + . kolla_extend_start
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: Running command: 'neutron-ovn-metadata-agent'
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + umask 0022
Oct 02 19:01:45 compute-0 ovn_metadata_agent[105919]: + exec neutron-ovn-metadata-agent
Oct 02 19:01:45 compute-0 systemd[1]: Reloading.
Oct 02 19:01:45 compute-0 systemd-rc-local-generator[106020]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:45 compute-0 systemd-sysv-generator[106023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:45 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 02 19:01:45 compute-0 sudo[105860]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:46 compute-0 sshd-session[97655]: Connection closed by 192.168.122.30 port 43642
Oct 02 19:01:46 compute-0 sshd-session[97652]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:01:46 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Oct 02 19:01:46 compute-0 systemd[1]: session-22.scope: Consumed 37.984s CPU time.
Oct 02 19:01:46 compute-0 systemd-logind[798]: Session 22 logged out. Waiting for processes to exit.
Oct 02 19:01:46 compute-0 systemd-logind[798]: Removed session 22.
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.391 105943 INFO neutron.common.config [-] Logging enabled!
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.391 105943 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.391 105943 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.392 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.392 105943 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.392 105943 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.392 105943 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.392 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.392 105943 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.393 105943 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.394 105943 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.395 105943 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.396 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.397 105943 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.398 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.399 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.400 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.401 105943 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.402 105943 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.403 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.404 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.405 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.406 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.407 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.408 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.409 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.410 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.411 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.412 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.413 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.414 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.415 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.416 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.417 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.418 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.419 105943 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.420 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.421 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.422 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.423 105943 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.431 105943 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.431 105943 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.432 105943 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.432 105943 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.432 105943 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.445 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bbab9e90-4b9d-4a75-81b6-ad2c1de412c6 (UUID: bbab9e90-4b9d-4a75-81b6-ad2c1de412c6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.481 105943 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.481 105943 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.481 105943 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.482 105943 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.485 105943 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.493 105943 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.499 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bbab9e90-4b9d-4a75-81b6-ad2c1de412c6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], external_ids={}, name=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, nb_cfg_timestamp=1759431653562, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.500 105943 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe0d77c20a0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.501 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.501 105943 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.502 105943 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.502 105943 INFO oslo_service.service [-] Starting 1 workers
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.507 105943 DEBUG oslo_service.service [-] Started child 106055 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.511 105943 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp8y5ts_xq/privsep.sock']
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.513 106055 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-915740'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.552 106055 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.553 106055 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.553 106055 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.558 106055 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.567 106055 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 19:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:47.577 106055 INFO eventlet.wsgi.server [-] (106055) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 02 19:01:48 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.184 105943 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.185 105943 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8y5ts_xq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.047 106060 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.051 106060 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.053 106060 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.053 106060 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106060
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.189 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[59872edc-f72b-4d32-9e37-6d9cd23d8494]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.660 106060 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.660 106060 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:01:48 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:48.661 106060 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.170 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[27b77f45-769c-4ce3-b0b6-d7439469fc8e]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.173 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, column=external_ids, values=({'neutron:ovn-metadata-id': 'b325207b-4629-57f3-9e20-3ca80fbce58f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.255 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.298 105943 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.298 105943 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.299 105943 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.299 105943 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.299 105943 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.299 105943 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.300 105943 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.300 105943 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.300 105943 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.301 105943 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.301 105943 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.301 105943 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.301 105943 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.302 105943 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.302 105943 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.302 105943 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.303 105943 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.303 105943 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.303 105943 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.303 105943 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.304 105943 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.304 105943 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.304 105943 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.304 105943 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.305 105943 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.305 105943 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.305 105943 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.306 105943 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.306 105943 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.306 105943 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.307 105943 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.307 105943 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.307 105943 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.308 105943 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.308 105943 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.308 105943 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.309 105943 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.309 105943 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.310 105943 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.310 105943 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.310 105943 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.311 105943 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.311 105943 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.311 105943 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.312 105943 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.312 105943 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.312 105943 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.313 105943 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.313 105943 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.313 105943 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.314 105943 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.314 105943 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.314 105943 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.314 105943 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.314 105943 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.315 105943 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.315 105943 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.315 105943 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.315 105943 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.316 105943 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.316 105943 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.316 105943 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.317 105943 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.317 105943 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.317 105943 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.317 105943 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.318 105943 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.318 105943 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.318 105943 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.318 105943 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.319 105943 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.319 105943 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.319 105943 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.319 105943 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.320 105943 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.320 105943 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.320 105943 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.320 105943 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.321 105943 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.321 105943 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.321 105943 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.321 105943 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.322 105943 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.322 105943 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.322 105943 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.322 105943 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.323 105943 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.323 105943 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.323 105943 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.323 105943 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.324 105943 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.324 105943 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.324 105943 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.324 105943 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.325 105943 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.325 105943 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.325 105943 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.325 105943 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.325 105943 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.326 105943 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.326 105943 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.326 105943 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.326 105943 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.327 105943 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.327 105943 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.327 105943 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.327 105943 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.327 105943 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.328 105943 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.328 105943 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.328 105943 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.329 105943 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.329 105943 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.329 105943 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.329 105943 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.330 105943 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.330 105943 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.330 105943 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.330 105943 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.331 105943 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.331 105943 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.331 105943 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.332 105943 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.332 105943 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.332 105943 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.333 105943 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.333 105943 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.333 105943 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.334 105943 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.334 105943 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.334 105943 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.335 105943 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.335 105943 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.335 105943 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.335 105943 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.336 105943 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.336 105943 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.336 105943 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.337 105943 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.337 105943 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.338 105943 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.338 105943 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.339 105943 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.339 105943 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.339 105943 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.339 105943 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.340 105943 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.340 105943 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.340 105943 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.340 105943 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.340 105943 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.341 105943 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.341 105943 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.341 105943 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.342 105943 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.342 105943 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.342 105943 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.343 105943 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.343 105943 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.343 105943 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.343 105943 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.343 105943 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.344 105943 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.345 105943 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.346 105943 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.347 105943 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.348 105943 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.349 105943 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.350 105943 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.351 105943 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.352 105943 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.353 105943 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.354 105943 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.355 105943 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.355 105943 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.355 105943 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.355 105943 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.355 105943 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.355 105943 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.356 105943 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.357 105943 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.358 105943 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.359 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.360 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.361 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.362 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.363 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.363 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.363 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.363 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.363 105943 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.363 105943 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.364 105943 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.364 105943 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.364 105943 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:01:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:01:49.364 105943 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:01:52 compute-0 sshd-session[106065]: Accepted publickey for zuul from 192.168.122.30 port 42568 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:01:52 compute-0 systemd-logind[798]: New session 23 of user zuul.
Oct 02 19:01:52 compute-0 systemd[1]: Started Session 23 of User zuul.
Oct 02 19:01:52 compute-0 sshd-session[106065]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:01:53 compute-0 python3.9[106218]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:01:54 compute-0 sudo[106372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eogfnixwejddheuaekrujnevigbshzzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431713.988237-34-204036658559462/AnsiballZ_command.py'
Oct 02 19:01:54 compute-0 sudo[106372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:54 compute-0 python3.9[106374]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:01:54 compute-0 sudo[106372]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:55 compute-0 sudo[106537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veleievezponppxbjkipkpupplxumzsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431715.1879935-45-223271594221774/AnsiballZ_systemd_service.py'
Oct 02 19:01:55 compute-0 sudo[106537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:01:56 compute-0 python3.9[106539]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:01:56 compute-0 systemd[1]: Reloading.
Oct 02 19:01:56 compute-0 systemd-sysv-generator[106568]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:01:56 compute-0 systemd-rc-local-generator[106562]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:01:56 compute-0 sudo[106537]: pam_unix(sudo:session): session closed for user root
Oct 02 19:01:57 compute-0 python3.9[106724]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:01:57 compute-0 network[106741]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:01:57 compute-0 network[106742]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:01:57 compute-0 network[106743]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:02:02 compute-0 sudo[107005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqchouqpatpxludrmdyxxixgmfkadfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431721.834837-64-9399501257747/AnsiballZ_systemd_service.py'
Oct 02 19:02:02 compute-0 sudo[107005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:02 compute-0 python3.9[107007]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:02 compute-0 sudo[107005]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:02 compute-0 sudo[107158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-banmrzsehqbxqvacdkvqyjyulemgtkey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431722.600136-64-154391278117888/AnsiballZ_systemd_service.py'
Oct 02 19:02:02 compute-0 sudo[107158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:03 compute-0 python3.9[107160]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:03 compute-0 sudo[107158]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:03 compute-0 sudo[107311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfaclkmpvtabpglverrrmzvuoyvqldle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431723.4017444-64-99675069971685/AnsiballZ_systemd_service.py'
Oct 02 19:02:03 compute-0 sudo[107311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:03 compute-0 python3.9[107313]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:04 compute-0 sudo[107311]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:04 compute-0 sudo[107464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtaxvpnkgiqszuwjfyczrgyedvtxrjxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431724.171589-64-209146591027766/AnsiballZ_systemd_service.py'
Oct 02 19:02:04 compute-0 sudo[107464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:04 compute-0 python3.9[107466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:04 compute-0 sudo[107464]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:05 compute-0 sudo[107617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkfkykwhtuccqquqxgmfcinjfvakpegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431725.093994-64-137580033402523/AnsiballZ_systemd_service.py'
Oct 02 19:02:05 compute-0 sudo[107617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:05 compute-0 python3.9[107619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:06 compute-0 sudo[107617]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:07 compute-0 sudo[107770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjgozneogxizyxxsmmexvakvtkbropi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431726.9925022-64-263221614414405/AnsiballZ_systemd_service.py'
Oct 02 19:02:07 compute-0 sudo[107770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:07 compute-0 python3.9[107772]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:07 compute-0 sudo[107770]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:08 compute-0 sudo[107923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzybdvpyhmycnbyenukkpmwqibpdwkgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431727.9197645-64-12003978722694/AnsiballZ_systemd_service.py'
Oct 02 19:02:08 compute-0 sudo[107923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:08 compute-0 python3.9[107925]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:02:08 compute-0 sudo[107923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:09 compute-0 sudo[108076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytlqyotxklknetqqqazfmaekpglvemwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431729.138541-116-11165542857282/AnsiballZ_file.py'
Oct 02 19:02:09 compute-0 sudo[108076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:09 compute-0 python3.9[108078]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:09 compute-0 sudo[108076]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:10 compute-0 sudo[108228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjuhzjvamrvrseukwuiwcmxqasbayfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431729.962709-116-265202488517784/AnsiballZ_file.py'
Oct 02 19:02:10 compute-0 sudo[108228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:10 compute-0 python3.9[108230]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:10 compute-0 sudo[108228]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:10 compute-0 sudo[108380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilbkqbcdgvzonnrvmskicvhnnihsaqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431730.6427224-116-112575937180186/AnsiballZ_file.py'
Oct 02 19:02:10 compute-0 sudo[108380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:11 compute-0 python3.9[108382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:11 compute-0 sudo[108380]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:11 compute-0 sudo[108532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqqzpdgalltatnugextkhfejblrhtfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431731.3106954-116-176964768356860/AnsiballZ_file.py'
Oct 02 19:02:11 compute-0 sudo[108532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:11 compute-0 python3.9[108534]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:11 compute-0 sudo[108532]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:12 compute-0 sudo[108684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzzpweecebacmfqqvgiqsxmtshmybwaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431732.0897112-116-33524859308287/AnsiballZ_file.py'
Oct 02 19:02:12 compute-0 sudo[108684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:12 compute-0 python3.9[108686]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:12 compute-0 sudo[108684]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:13 compute-0 sudo[108836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiagqvinyxwowqewifduxglrynlybjvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431732.7977467-116-158347538597902/AnsiballZ_file.py'
Oct 02 19:02:13 compute-0 sudo[108836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:13 compute-0 python3.9[108838]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:13 compute-0 sudo[108836]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:13 compute-0 sudo[108988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcttygbrmdeysiadgatioajdexeuaays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431733.564036-116-154497048522646/AnsiballZ_file.py'
Oct 02 19:02:13 compute-0 sudo[108988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:14 compute-0 python3.9[108990]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:14 compute-0 sudo[108988]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:14 compute-0 sudo[109140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqaitbrdftxtcqhmbwujxlsogtbfmihx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431734.3250475-166-5428334206263/AnsiballZ_file.py'
Oct 02 19:02:14 compute-0 sudo[109140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:14 compute-0 python3.9[109142]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:14 compute-0 sudo[109140]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:15 compute-0 sudo[109292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfvshotmxfcckuxucwuxyxmopxijmscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431735.0335896-166-94767879060935/AnsiballZ_file.py'
Oct 02 19:02:15 compute-0 sudo[109292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:15 compute-0 python3.9[109294]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:15 compute-0 sudo[109292]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:16 compute-0 sudo[109472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twfmvahdszcbduftctntrzlogougmpot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431735.7294648-166-212248729881577/AnsiballZ_file.py'
Oct 02 19:02:16 compute-0 sudo[109472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:16 compute-0 podman[109418]: 2025-10-02 19:02:16.138213504 +0000 UTC m=+0.071824390 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 19:02:16 compute-0 podman[109419]: 2025-10-02 19:02:16.180984236 +0000 UTC m=+0.120429672 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 02 19:02:16 compute-0 python3.9[109483]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:16 compute-0 sudo[109472]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:16 compute-0 sudo[109639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypgpilixajfxswruvigzmblfhsgbgnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431736.5021539-166-230136229124371/AnsiballZ_file.py'
Oct 02 19:02:16 compute-0 sudo[109639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:17 compute-0 python3.9[109641]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:17 compute-0 sudo[109639]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:17 compute-0 sudo[109791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodhswkwwodtqugybgltrurttiqysrqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431737.2426183-166-200644452096912/AnsiballZ_file.py'
Oct 02 19:02:17 compute-0 sudo[109791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:17 compute-0 python3.9[109793]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:17 compute-0 sudo[109791]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:18 compute-0 sudo[109943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoogfiuklilebcjbeslavnzokjnwphxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431737.9473565-166-235277171348239/AnsiballZ_file.py'
Oct 02 19:02:18 compute-0 sudo[109943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:18 compute-0 python3.9[109945]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:18 compute-0 sudo[109943]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:18 compute-0 sudo[110095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgsrvbgytocwtdflhmskocezjtipswmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431738.5662048-166-173087877355091/AnsiballZ_file.py'
Oct 02 19:02:18 compute-0 sudo[110095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:19 compute-0 python3.9[110097]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:02:19 compute-0 sudo[110095]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:19 compute-0 sudo[110247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwtqndmzrykimppvwocypfnmeevjcnot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431739.4338295-217-189774223789523/AnsiballZ_command.py'
Oct 02 19:02:19 compute-0 sudo[110247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:19 compute-0 python3.9[110249]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:19 compute-0 sudo[110247]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:20 compute-0 python3.9[110401]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:02:21 compute-0 sudo[110551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stvpqvmbwwboqphavpbfqockhxwmtwkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431741.0899892-235-100304861619874/AnsiballZ_systemd_service.py'
Oct 02 19:02:21 compute-0 sudo[110551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:21 compute-0 python3.9[110553]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:02:21 compute-0 systemd[1]: Reloading.
Oct 02 19:02:21 compute-0 systemd-sysv-generator[110583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:02:21 compute-0 systemd-rc-local-generator[110580]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:02:21 compute-0 sudo[110551]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:22 compute-0 sudo[110738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvfsuxsaszbgofgggqndgihdbtscpawp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431742.0516722-243-91705972249367/AnsiballZ_command.py'
Oct 02 19:02:22 compute-0 sudo[110738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:22 compute-0 python3.9[110740]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:22 compute-0 sudo[110738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:22 compute-0 sudo[110891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjysgbbmfombhwhdcvuphecurclhbthv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431742.6444664-243-226818092560674/AnsiballZ_command.py'
Oct 02 19:02:22 compute-0 sudo[110891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:23 compute-0 python3.9[110893]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:23 compute-0 sudo[110891]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:23 compute-0 sudo[111044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwbfejuilwiwgorhozffmfmnypouaniq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431743.326056-243-13204915162458/AnsiballZ_command.py'
Oct 02 19:02:23 compute-0 sudo[111044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:23 compute-0 python3.9[111046]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:23 compute-0 sudo[111044]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:24 compute-0 sudo[111197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfzumttfpidmcrtzffmhmdmotfdgkmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431743.995594-243-48847246969518/AnsiballZ_command.py'
Oct 02 19:02:24 compute-0 sudo[111197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:24 compute-0 python3.9[111199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:24 compute-0 sudo[111197]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:25 compute-0 sudo[111350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymgcvwtbtzsuvjqkynzrqfoutpkoodbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431744.8732271-243-274124404441748/AnsiballZ_command.py'
Oct 02 19:02:25 compute-0 sudo[111350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:25 compute-0 python3.9[111352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:25 compute-0 sudo[111350]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:25 compute-0 sudo[111503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyioexvoawbxbekngemphbyfloyvvwiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431745.5905015-243-215957819654739/AnsiballZ_command.py'
Oct 02 19:02:25 compute-0 sudo[111503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:26 compute-0 python3.9[111505]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:26 compute-0 sudo[111503]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:26 compute-0 sudo[111656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crpoaxblsptracegmufnljugormkebqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431746.2818563-243-91703303672992/AnsiballZ_command.py'
Oct 02 19:02:26 compute-0 sudo[111656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:26 compute-0 python3.9[111658]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:02:26 compute-0 sudo[111656]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:27 compute-0 sudo[111809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmicieifbastzgmjghofnnvujxqskfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431747.3259246-297-119587632629801/AnsiballZ_getent.py'
Oct 02 19:02:27 compute-0 sudo[111809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:28 compute-0 python3.9[111811]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 02 19:02:28 compute-0 sudo[111809]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:28 compute-0 sudo[111962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zegntfbwyjozazzfggjrrlhuklzvbjnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431748.3050253-305-204933410051814/AnsiballZ_group.py'
Oct 02 19:02:29 compute-0 sudo[111962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:29 compute-0 python3.9[111964]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 19:02:29 compute-0 groupadd[111965]: group added to /etc/group: name=libvirt, GID=42473
Oct 02 19:02:29 compute-0 groupadd[111965]: group added to /etc/gshadow: name=libvirt
Oct 02 19:02:29 compute-0 groupadd[111965]: new group: name=libvirt, GID=42473
Oct 02 19:02:29 compute-0 sudo[111962]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:30 compute-0 sudo[112120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpazyaaqkcqnhimodfjtngvmqaaocalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431749.5759265-313-219155897691380/AnsiballZ_user.py'
Oct 02 19:02:30 compute-0 sudo[112120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:30 compute-0 python3.9[112122]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 19:02:30 compute-0 useradd[112124]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 19:02:30 compute-0 sudo[112120]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:31 compute-0 sudo[112280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbivdmqehkihmzyqsnomboumosbmmioz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431750.838712-324-7662188353944/AnsiballZ_setup.py'
Oct 02 19:02:31 compute-0 sudo[112280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:31 compute-0 python3.9[112282]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:02:31 compute-0 sudo[112280]: pam_unix(sudo:session): session closed for user root
Oct 02 19:02:32 compute-0 sudo[112364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozpxfqpqfhkcldbrkwfdewxpgfzwxcfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431750.838712-324-7662188353944/AnsiballZ_dnf.py'
Oct 02 19:02:32 compute-0 sudo[112364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:02:32 compute-0 python3.9[112366]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:02:46 compute-0 podman[112551]: 2025-10-02 19:02:46.719014925 +0000 UTC m=+0.085937426 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:02:46 compute-0 podman[112552]: 2025-10-02 19:02:46.79414298 +0000 UTC m=+0.156536716 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 19:02:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:02:47.433 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:02:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:02:47.434 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:02:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:02:47.434 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:03:00 compute-0 kernel: SELinux:  Converting 2752 SID table entries...
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 19:03:00 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 19:03:01 compute-0 sshd-session[112609]: Connection closed by 39.162.46.234 port 61179
Oct 02 19:03:09 compute-0 kernel: SELinux:  Converting 2752 SID table entries...
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 19:03:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 19:03:17 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 02 19:03:17 compute-0 podman[112618]: 2025-10-02 19:03:17.701786749 +0000 UTC m=+0.071646319 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:03:17 compute-0 podman[112619]: 2025-10-02 19:03:17.774058884 +0000 UTC m=+0.133153479 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 02 19:03:37 compute-0 sshd-session[121089]: Received disconnect from 80.94.93.176 port 35168:11:  [preauth]
Oct 02 19:03:37 compute-0 sshd-session[121089]: Disconnected from authenticating user root 80.94.93.176 port 35168 [preauth]
Oct 02 19:03:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:03:47.435 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:03:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:03:47.436 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:03:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:03:47.436 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:03:48 compute-0 podman[126658]: 2025-10-02 19:03:48.703412844 +0000 UTC m=+0.065475999 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 02 19:03:48 compute-0 podman[126672]: 2025-10-02 19:03:48.739147354 +0000 UTC m=+0.099717649 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:04:05 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 19:04:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 19:04:06 compute-0 groupadd[129469]: group added to /etc/group: name=dnsmasq, GID=992
Oct 02 19:04:06 compute-0 groupadd[129469]: group added to /etc/gshadow: name=dnsmasq
Oct 02 19:04:07 compute-0 groupadd[129469]: new group: name=dnsmasq, GID=992
Oct 02 19:04:07 compute-0 useradd[129476]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 02 19:04:07 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 19:04:07 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 02 19:04:07 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Oct 02 19:04:08 compute-0 groupadd[129489]: group added to /etc/group: name=clevis, GID=991
Oct 02 19:04:08 compute-0 groupadd[129489]: group added to /etc/gshadow: name=clevis
Oct 02 19:04:08 compute-0 groupadd[129489]: new group: name=clevis, GID=991
Oct 02 19:04:08 compute-0 useradd[129496]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 02 19:04:08 compute-0 usermod[129506]: add 'clevis' to group 'tss'
Oct 02 19:04:08 compute-0 usermod[129506]: add 'clevis' to shadow group 'tss'
Oct 02 19:04:10 compute-0 polkitd[6312]: Reloading rules
Oct 02 19:04:10 compute-0 polkitd[6312]: Collecting garbage unconditionally...
Oct 02 19:04:10 compute-0 polkitd[6312]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 19:04:10 compute-0 polkitd[6312]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 19:04:10 compute-0 polkitd[6312]: Finished loading, compiling and executing 4 rules
Oct 02 19:04:10 compute-0 polkitd[6312]: Reloading rules
Oct 02 19:04:10 compute-0 polkitd[6312]: Collecting garbage unconditionally...
Oct 02 19:04:10 compute-0 polkitd[6312]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 19:04:10 compute-0 polkitd[6312]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 19:04:10 compute-0 polkitd[6312]: Finished loading, compiling and executing 4 rules
Oct 02 19:04:11 compute-0 groupadd[129693]: group added to /etc/group: name=ceph, GID=167
Oct 02 19:04:11 compute-0 groupadd[129693]: group added to /etc/gshadow: name=ceph
Oct 02 19:04:11 compute-0 groupadd[129693]: new group: name=ceph, GID=167
Oct 02 19:04:11 compute-0 useradd[129699]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 02 19:04:14 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 02 19:04:14 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 02 19:04:14 compute-0 sshd[1008]: Received signal 15; terminating.
Oct 02 19:04:14 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 02 19:04:14 compute-0 systemd[1]: sshd.service: Consumed 2.052s CPU time, read 0B from disk, written 12.0K to disk.
Oct 02 19:04:14 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 02 19:04:14 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 02 19:04:14 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 19:04:14 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 19:04:14 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 19:04:14 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 02 19:04:14 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 02 19:04:14 compute-0 sshd[130218]: Server listening on 0.0.0.0 port 22.
Oct 02 19:04:14 compute-0 sshd[130218]: Server listening on :: port 22.
Oct 02 19:04:14 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 02 19:04:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 19:04:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 19:04:17 compute-0 systemd[1]: Reloading.
Oct 02 19:04:17 compute-0 systemd-sysv-generator[130480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:17 compute-0 systemd-rc-local-generator[130474]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 19:04:19 compute-0 podman[132408]: 2025-10-02 19:04:19.704030689 +0000 UTC m=+0.075623865 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:04:19 compute-0 podman[132428]: 2025-10-02 19:04:19.74458448 +0000 UTC m=+0.106546234 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:04:20 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 19:04:20 compute-0 PackageKit[132879]: daemon start
Oct 02 19:04:20 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 19:04:20 compute-0 sudo[112364]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:21 compute-0 sudo[134541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqluylwnvpdyeovbesrqymocavemwqbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431860.5969417-336-31221433504570/AnsiballZ_systemd.py'
Oct 02 19:04:21 compute-0 sudo[134541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:21 compute-0 python3.9[134567]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:04:21 compute-0 systemd[1]: Reloading.
Oct 02 19:04:21 compute-0 systemd-rc-local-generator[135040]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:21 compute-0 systemd-sysv-generator[135044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:21 compute-0 sudo[134541]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:22 compute-0 sudo[135851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olsuskbsmtlkwgmqhlyxtuqxxztdwyyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431862.0375342-336-243246085451822/AnsiballZ_systemd.py'
Oct 02 19:04:22 compute-0 sudo[135851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:22 compute-0 python3.9[135870]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:04:22 compute-0 systemd[1]: Reloading.
Oct 02 19:04:22 compute-0 systemd-rc-local-generator[136300]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:22 compute-0 systemd-sysv-generator[136304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:22 compute-0 sudo[135851]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:23 compute-0 sudo[137072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlodprmuldoihrjsiqigctsgyutyxwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431863.1305218-336-28849743052782/AnsiballZ_systemd.py'
Oct 02 19:04:23 compute-0 sudo[137072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:23 compute-0 python3.9[137090]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:04:23 compute-0 systemd[1]: Reloading.
Oct 02 19:04:23 compute-0 systemd-rc-local-generator[137563]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:23 compute-0 systemd-sysv-generator[137567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:24 compute-0 sudo[137072]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:24 compute-0 sudo[138416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmdvcmdncjtgmadcexnwyukvmyyvaabi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431864.179641-336-218269593906878/AnsiballZ_systemd.py'
Oct 02 19:04:24 compute-0 sudo[138416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:24 compute-0 python3.9[138427]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:04:24 compute-0 systemd[1]: Reloading.
Oct 02 19:04:24 compute-0 systemd-rc-local-generator[138857]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:24 compute-0 systemd-sysv-generator[138862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:25 compute-0 sudo[138416]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:25 compute-0 sudo[139679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rblzshblkvvavzhuzheezzaloeytgbjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431865.3420007-365-34774137645450/AnsiballZ_systemd.py'
Oct 02 19:04:25 compute-0 sudo[139679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:25 compute-0 python3.9[139681]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:26 compute-0 systemd[1]: Reloading.
Oct 02 19:04:26 compute-0 systemd-rc-local-generator[139712]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:26 compute-0 systemd-sysv-generator[139716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 19:04:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 19:04:26 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.313s CPU time.
Oct 02 19:04:26 compute-0 systemd[1]: run-r3ad41820cedd47c6a7b2318e27b32b3d.service: Deactivated successfully.
Oct 02 19:04:26 compute-0 sudo[139679]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:26 compute-0 sudo[139870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joukljfokudjefrfpmqmlaioskblkgdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431866.4147854-365-32989533972922/AnsiballZ_systemd.py'
Oct 02 19:04:26 compute-0 sudo[139870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:27 compute-0 python3.9[139872]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:27 compute-0 systemd[1]: Reloading.
Oct 02 19:04:27 compute-0 systemd-sysv-generator[139904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:27 compute-0 systemd-rc-local-generator[139898]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:27 compute-0 sudo[139870]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:27 compute-0 sudo[140060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrbmgbauxsnropczingunvhbwpqeizl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431867.4985905-365-85926427087748/AnsiballZ_systemd.py'
Oct 02 19:04:27 compute-0 sudo[140060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:28 compute-0 python3.9[140062]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:28 compute-0 systemd[1]: Reloading.
Oct 02 19:04:28 compute-0 systemd-sysv-generator[140096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:28 compute-0 systemd-rc-local-generator[140093]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:28 compute-0 sudo[140060]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:29 compute-0 sudo[140250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zitkrhsaokggpkddrelqkmqgiccwbwlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431868.7030003-365-271373843704875/AnsiballZ_systemd.py'
Oct 02 19:04:29 compute-0 sudo[140250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:29 compute-0 python3.9[140252]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:29 compute-0 sudo[140250]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:29 compute-0 sudo[140405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsjgzheetglzvifepdwdkjszdqdjaass ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431869.660606-365-244947482557544/AnsiballZ_systemd.py'
Oct 02 19:04:29 compute-0 sudo[140405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:30 compute-0 python3.9[140407]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:30 compute-0 systemd[1]: Reloading.
Oct 02 19:04:30 compute-0 systemd-rc-local-generator[140437]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:30 compute-0 systemd-sysv-generator[140441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:30 compute-0 sudo[140405]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:31 compute-0 sudo[140594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqjhdcnqivphokmszgxcvljdckgmlsgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431870.8833632-401-247697993002567/AnsiballZ_systemd.py'
Oct 02 19:04:31 compute-0 sudo[140594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:31 compute-0 python3.9[140596]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 19:04:31 compute-0 systemd[1]: Reloading.
Oct 02 19:04:31 compute-0 systemd-rc-local-generator[140623]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:04:31 compute-0 systemd-sysv-generator[140626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:04:31 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 02 19:04:31 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 02 19:04:31 compute-0 sudo[140594]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:32 compute-0 sudo[140786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoyqmoqildsvqihfvuruaepgdbkfhqwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431872.1568673-409-127116674381844/AnsiballZ_systemd.py'
Oct 02 19:04:32 compute-0 sudo[140786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:32 compute-0 python3.9[140788]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:32 compute-0 sudo[140786]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:33 compute-0 sudo[140941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvwmsapgltwewndnmnopdwukjixlkkcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431873.0367706-409-173809978560955/AnsiballZ_systemd.py'
Oct 02 19:04:33 compute-0 sudo[140941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:33 compute-0 python3.9[140943]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:33 compute-0 sudo[140941]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:34 compute-0 sudo[141096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjszhjqoaqbdqqypzcergapdwxytyeva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431873.9955163-409-158106969309488/AnsiballZ_systemd.py'
Oct 02 19:04:34 compute-0 sudo[141096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:34 compute-0 python3.9[141098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:34 compute-0 sudo[141096]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:35 compute-0 sudo[141251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zttewjnxfvnglzbwpjnfmwevhjjttvkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431874.9551182-409-244992899477890/AnsiballZ_systemd.py'
Oct 02 19:04:35 compute-0 sudo[141251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:35 compute-0 python3.9[141253]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:35 compute-0 sudo[141251]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:36 compute-0 sudo[141406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjesjonscltnxymxlznqbnnbtgmvikru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431875.9684534-409-125517244606989/AnsiballZ_systemd.py'
Oct 02 19:04:36 compute-0 sudo[141406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:36 compute-0 python3.9[141408]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:36 compute-0 sudo[141406]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:37 compute-0 sudo[141561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrdgfbtoztswyhlqccrqyopytqtvnljv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431876.9236813-409-173117888775021/AnsiballZ_systemd.py'
Oct 02 19:04:37 compute-0 sudo[141561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:37 compute-0 python3.9[141563]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:37 compute-0 sudo[141561]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:38 compute-0 sudo[141716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brezpixujhemteiupyzoskzbsxjcerla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431877.859433-409-265612837812463/AnsiballZ_systemd.py'
Oct 02 19:04:38 compute-0 sudo[141716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:38 compute-0 python3.9[141718]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:38 compute-0 sudo[141716]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:39 compute-0 sudo[141871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eorxdzayctcjcyyxndgzeibwepcgvgge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431878.7397845-409-22613086293271/AnsiballZ_systemd.py'
Oct 02 19:04:39 compute-0 sudo[141871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:39 compute-0 python3.9[141873]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:39 compute-0 sudo[141871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:39 compute-0 sudo[142026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzptkewdxgpusdzvesuhwpvnfqfmiciv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431879.587113-409-249878075395859/AnsiballZ_systemd.py'
Oct 02 19:04:39 compute-0 sudo[142026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:40 compute-0 python3.9[142028]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:40 compute-0 sudo[142026]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:40 compute-0 sudo[142181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgxmxqpjmeacojoichezjfgpkummhwiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431880.5229928-409-54689269744591/AnsiballZ_systemd.py'
Oct 02 19:04:40 compute-0 sudo[142181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:41 compute-0 python3.9[142183]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:41 compute-0 sudo[142181]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:41 compute-0 sudo[142336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swlkkrdxewtoupobrghfqojbxxewizta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431881.4272401-409-224955346185687/AnsiballZ_systemd.py'
Oct 02 19:04:41 compute-0 sudo[142336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:42 compute-0 python3.9[142338]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:42 compute-0 sudo[142336]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:42 compute-0 sudo[142491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtokdraszudsbxqxgmwksjorajjousj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431882.3320997-409-236878841439007/AnsiballZ_systemd.py'
Oct 02 19:04:42 compute-0 sudo[142491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:42 compute-0 python3.9[142493]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:43 compute-0 sudo[142491]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:43 compute-0 sudo[142646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wahvmlnontgvtzjrivdwhdbuvkawtivf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431883.2164547-409-163710245062554/AnsiballZ_systemd.py'
Oct 02 19:04:43 compute-0 sudo[142646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:43 compute-0 python3.9[142648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:43 compute-0 sudo[142646]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:44 compute-0 sudo[142801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwsnxrtfgwdtxzsoppmcoymiiyqukvtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431884.0348485-409-212500371655568/AnsiballZ_systemd.py'
Oct 02 19:04:44 compute-0 sudo[142801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:44 compute-0 python3.9[142803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 19:04:44 compute-0 sudo[142801]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:45 compute-0 sudo[142956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmgcundqrkgqqzsklgzujrtkxvoqeya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431885.064843-511-230217499775955/AnsiballZ_file.py'
Oct 02 19:04:45 compute-0 sudo[142956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:45 compute-0 python3.9[142958]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:45 compute-0 sudo[142956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:46 compute-0 sudo[143108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjnxytkqdwwjgdjlfovkghwhumpnsmul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431885.7730973-511-77599433581905/AnsiballZ_file.py'
Oct 02 19:04:46 compute-0 sudo[143108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:46 compute-0 python3.9[143110]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:46 compute-0 sudo[143108]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:46 compute-0 sudo[143260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fncokybtznmsvhclvulekrmdmrgcenrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431886.5598083-511-23705863256782/AnsiballZ_file.py'
Oct 02 19:04:46 compute-0 sudo[143260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:47 compute-0 python3.9[143262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:47 compute-0 sudo[143260]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:04:47.436 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:04:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:04:47.438 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:04:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:04:47.438 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:04:47 compute-0 sudo[143412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkfhwvshkuwjipmfyrudufsyllhtgnhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431887.2386224-511-78530820739234/AnsiballZ_file.py'
Oct 02 19:04:47 compute-0 sudo[143412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:47 compute-0 python3.9[143414]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:47 compute-0 sudo[143412]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:48 compute-0 sudo[143564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkbpvwcwegtrzrcdahjjincmkcgbmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431888.0237713-511-142289631454925/AnsiballZ_file.py'
Oct 02 19:04:48 compute-0 sudo[143564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:48 compute-0 python3.9[143566]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:48 compute-0 sudo[143564]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:49 compute-0 sudo[143716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eocupixbvzgvfttyfbpltszuiekyhefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431888.7708902-511-267698567932058/AnsiballZ_file.py'
Oct 02 19:04:49 compute-0 sudo[143716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:49 compute-0 python3.9[143718]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:04:49 compute-0 sudo[143716]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:50 compute-0 sudo[143893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhkhvsqrxnuqqpfjvzqkjdgcqrxtmeol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431889.5606372-554-138010417211377/AnsiballZ_stat.py'
Oct 02 19:04:50 compute-0 sudo[143893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:50 compute-0 podman[143842]: 2025-10-02 19:04:50.14520292 +0000 UTC m=+0.080704402 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:04:50 compute-0 podman[143843]: 2025-10-02 19:04:50.153627142 +0000 UTC m=+0.097858124 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:04:50 compute-0 python3.9[143913]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:50 compute-0 sudo[143893]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:51 compute-0 sudo[144036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltevxgrssvmsaapgqgzzrpscinlknwfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431889.5606372-554-138010417211377/AnsiballZ_copy.py'
Oct 02 19:04:51 compute-0 sudo[144036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:51 compute-0 python3.9[144038]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431889.5606372-554-138010417211377/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:51 compute-0 sudo[144036]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:51 compute-0 sudo[144188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuqrodjrjjdrwxlvfxekyhiribivqji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431891.4163933-554-135762402790762/AnsiballZ_stat.py'
Oct 02 19:04:51 compute-0 sudo[144188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:51 compute-0 python3.9[144190]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:51 compute-0 sudo[144188]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:52 compute-0 sudo[144313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzioqgvakfjxibxmcunxrzcoefbvxut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431891.4163933-554-135762402790762/AnsiballZ_copy.py'
Oct 02 19:04:52 compute-0 sudo[144313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:52 compute-0 python3.9[144315]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431891.4163933-554-135762402790762/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:52 compute-0 sudo[144313]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:53 compute-0 sudo[144465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrrhlirzpoayoozykikmnyssmxfjzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431892.850498-554-12877160065658/AnsiballZ_stat.py'
Oct 02 19:04:53 compute-0 sudo[144465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:53 compute-0 python3.9[144467]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:53 compute-0 sudo[144465]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:53 compute-0 sudo[144590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzaxdclldyitxeybmpfekjkbmkcydveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431892.850498-554-12877160065658/AnsiballZ_copy.py'
Oct 02 19:04:53 compute-0 sudo[144590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:54 compute-0 python3.9[144592]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431892.850498-554-12877160065658/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:54 compute-0 sudo[144590]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:54 compute-0 sudo[144742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woejkitzefhrukimhorppuiefzfgajju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431894.3593125-554-16800662889799/AnsiballZ_stat.py'
Oct 02 19:04:54 compute-0 sudo[144742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:54 compute-0 python3.9[144744]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:54 compute-0 sudo[144742]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:55 compute-0 sudo[144867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxvuapnaqzdijvlhlxpmfpvjrxbzlwwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431894.3593125-554-16800662889799/AnsiballZ_copy.py'
Oct 02 19:04:55 compute-0 sudo[144867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:55 compute-0 python3.9[144869]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431894.3593125-554-16800662889799/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:55 compute-0 sudo[144867]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:56 compute-0 sudo[145019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwzgjpeyykasxgolfauyimegzzslaqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431895.8400383-554-216627261224676/AnsiballZ_stat.py'
Oct 02 19:04:56 compute-0 sudo[145019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:56 compute-0 python3.9[145021]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:56 compute-0 sudo[145019]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:57 compute-0 sudo[145144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ragnujindmqbczoalygbdjtqizftgwfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431895.8400383-554-216627261224676/AnsiballZ_copy.py'
Oct 02 19:04:57 compute-0 sudo[145144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:57 compute-0 python3.9[145146]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431895.8400383-554-216627261224676/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:57 compute-0 sudo[145144]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:57 compute-0 sudo[145296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejsfsnicejnjbrifnnvxplmuicikvuva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431897.44141-554-21492606531877/AnsiballZ_stat.py'
Oct 02 19:04:57 compute-0 sudo[145296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:58 compute-0 python3.9[145298]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:58 compute-0 sudo[145296]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:58 compute-0 sudo[145421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fylpcjucmkqfomcjnxsuuuaayqtyjycf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431897.44141-554-21492606531877/AnsiballZ_copy.py'
Oct 02 19:04:58 compute-0 sudo[145421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:58 compute-0 python3.9[145423]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431897.44141-554-21492606531877/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:04:58 compute-0 sudo[145421]: pam_unix(sudo:session): session closed for user root
Oct 02 19:04:59 compute-0 sudo[145573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyuhgizmsdmehfjkqnftueathudnspcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431899.1033306-554-219762117161390/AnsiballZ_stat.py'
Oct 02 19:04:59 compute-0 sudo[145573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:04:59 compute-0 python3.9[145575]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:04:59 compute-0 sudo[145573]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:00 compute-0 sudo[145696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbarzuytmucudoqsntdgzitholqtwuzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431899.1033306-554-219762117161390/AnsiballZ_copy.py'
Oct 02 19:05:00 compute-0 sudo[145696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:00 compute-0 python3.9[145698]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431899.1033306-554-219762117161390/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:00 compute-0 sudo[145696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:00 compute-0 sudo[145848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frinulxgwlvcspqaeidmdzqmrquykcbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431900.611744-554-218465523136524/AnsiballZ_stat.py'
Oct 02 19:05:00 compute-0 sudo[145848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:01 compute-0 python3.9[145850]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:01 compute-0 sudo[145848]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:01 compute-0 sudo[145973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdjafgetghhtzdtsstqnovnoakkvzhcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431900.611744-554-218465523136524/AnsiballZ_copy.py'
Oct 02 19:05:01 compute-0 sudo[145973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:01 compute-0 python3.9[145975]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759431900.611744-554-218465523136524/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:01 compute-0 sudo[145973]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:02 compute-0 sudo[146125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbysywxczxrnfzduxugggoukiiucqga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431902.1669295-667-71694799830162/AnsiballZ_command.py'
Oct 02 19:05:02 compute-0 sudo[146125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:02 compute-0 python3.9[146127]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 02 19:05:02 compute-0 sudo[146125]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:03 compute-0 sudo[146278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfujsbdryuhxsetjwrpotdhmsdqfiqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431902.936744-676-163752918009568/AnsiballZ_file.py'
Oct 02 19:05:03 compute-0 sudo[146278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:03 compute-0 python3.9[146280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:03 compute-0 sudo[146278]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:04 compute-0 sudo[146430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aldpvbzkmojytlxnuzevzttsnqbaizrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431903.6771476-676-136750210318285/AnsiballZ_file.py'
Oct 02 19:05:04 compute-0 sudo[146430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:04 compute-0 python3.9[146432]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:04 compute-0 sudo[146430]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:04 compute-0 sudo[146582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgtknelfzdeaqdylnzhovkkluywlbcuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431904.419096-676-227219364814826/AnsiballZ_file.py'
Oct 02 19:05:04 compute-0 sudo[146582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:04 compute-0 python3.9[146584]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:04 compute-0 sudo[146582]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:05 compute-0 sudo[146734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxjxclavhleyjsxbpxiiklnwuzjszzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431905.1109326-676-67859430350395/AnsiballZ_file.py'
Oct 02 19:05:05 compute-0 sudo[146734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:05 compute-0 python3.9[146736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:05 compute-0 sudo[146734]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:06 compute-0 sudo[146886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqximotvfxpavcwliwkvauxubgavfppe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431905.799905-676-19424687421597/AnsiballZ_file.py'
Oct 02 19:05:06 compute-0 sudo[146886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:06 compute-0 python3.9[146888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:06 compute-0 sudo[146886]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:06 compute-0 sudo[147038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yegrtzosltjwwguoqxejwzesxjwxygkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431906.5234725-676-170441649920578/AnsiballZ_file.py'
Oct 02 19:05:06 compute-0 sudo[147038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:07 compute-0 python3.9[147040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:07 compute-0 sudo[147038]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:07 compute-0 sudo[147190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitxsbvskvhjywncsarvlosarhcgydly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431907.2459817-676-223945634547416/AnsiballZ_file.py'
Oct 02 19:05:07 compute-0 sudo[147190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:07 compute-0 python3.9[147192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:07 compute-0 sudo[147190]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:08 compute-0 sudo[147342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcfkelmoxmgluwcnxggqnbapssqywijg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431907.9837708-676-2746154232869/AnsiballZ_file.py'
Oct 02 19:05:08 compute-0 sudo[147342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:08 compute-0 python3.9[147344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:08 compute-0 sudo[147342]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:09 compute-0 sudo[147494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozbauymrxidgcmdjbiymhjwpaqccyxaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431908.720305-676-237346676518364/AnsiballZ_file.py'
Oct 02 19:05:09 compute-0 sudo[147494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:09 compute-0 python3.9[147496]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:09 compute-0 sudo[147494]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:09 compute-0 sudo[147646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loecovcjqqidbwwfmlymuzyohzrohldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431909.5250785-676-101292744320301/AnsiballZ_file.py'
Oct 02 19:05:09 compute-0 sudo[147646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:10 compute-0 python3.9[147648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:10 compute-0 sudo[147646]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:10 compute-0 sudo[147798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiwdoeuvmiuwwwavtooptqbfpabsypik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431910.1917272-676-128820520781579/AnsiballZ_file.py'
Oct 02 19:05:10 compute-0 sudo[147798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:10 compute-0 python3.9[147800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:10 compute-0 sudo[147798]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:11 compute-0 sudo[147950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzvtacjirhkklnfozemhxikmftdhepo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431911.0179758-676-32603524116506/AnsiballZ_file.py'
Oct 02 19:05:11 compute-0 sudo[147950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:11 compute-0 python3.9[147952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:11 compute-0 sudo[147950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:12 compute-0 sudo[148102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aixtrsaiuviajvomrcsszeukjbzvmphf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431911.798851-676-75083691830002/AnsiballZ_file.py'
Oct 02 19:05:12 compute-0 sudo[148102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:12 compute-0 python3.9[148104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:12 compute-0 sudo[148102]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:12 compute-0 sudo[148254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycywcfrzwfpkiirgtufgxstuwlkdedc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431912.5651138-676-42480573702925/AnsiballZ_file.py'
Oct 02 19:05:12 compute-0 sudo[148254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:13 compute-0 python3.9[148256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:13 compute-0 sudo[148254]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:13 compute-0 sudo[148406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjomegcbkrkniawvprqvomvukctfldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431913.404611-775-185476300562180/AnsiballZ_stat.py'
Oct 02 19:05:13 compute-0 sudo[148406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:13 compute-0 python3.9[148408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:13 compute-0 sudo[148406]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:14 compute-0 sudo[148529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvssugcvgnalncrisktsdlrdphhwofkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431913.404611-775-185476300562180/AnsiballZ_copy.py'
Oct 02 19:05:14 compute-0 sudo[148529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:14 compute-0 python3.9[148531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431913.404611-775-185476300562180/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:14 compute-0 sudo[148529]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:15 compute-0 sudo[148681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azoradlvksizgvsubswjtrlcipnpumkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431914.7739463-775-180567803059042/AnsiballZ_stat.py'
Oct 02 19:05:15 compute-0 sudo[148681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:15 compute-0 python3.9[148683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:15 compute-0 sudo[148681]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:15 compute-0 sudo[148804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgiynledfqvpthttyccjgwwgypejupfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431914.7739463-775-180567803059042/AnsiballZ_copy.py'
Oct 02 19:05:15 compute-0 sudo[148804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:16 compute-0 python3.9[148806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431914.7739463-775-180567803059042/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:16 compute-0 sudo[148804]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:16 compute-0 sudo[148956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkolxgncbwafhbpjopepjfjzqoycuqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431916.3899918-775-80187310358047/AnsiballZ_stat.py'
Oct 02 19:05:16 compute-0 sudo[148956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:16 compute-0 python3.9[148958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:16 compute-0 sudo[148956]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:17 compute-0 sudo[149079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frhmrrbvjrdeidcacnvfjbjkhnqgntez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431916.3899918-775-80187310358047/AnsiballZ_copy.py'
Oct 02 19:05:17 compute-0 sudo[149079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:17 compute-0 python3.9[149081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431916.3899918-775-80187310358047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:17 compute-0 sudo[149079]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:18 compute-0 sudo[149231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvpyjzsvaucpjwvscyikqermrsnnfsaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431917.7582185-775-127625273340398/AnsiballZ_stat.py'
Oct 02 19:05:18 compute-0 sudo[149231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:18 compute-0 python3.9[149233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:18 compute-0 sudo[149231]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:18 compute-0 sudo[149354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjvpgmacamtvzbwkdlqfmjqzhtgjsio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431917.7582185-775-127625273340398/AnsiballZ_copy.py'
Oct 02 19:05:18 compute-0 sudo[149354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:18 compute-0 python3.9[149356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431917.7582185-775-127625273340398/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:18 compute-0 sudo[149354]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:19 compute-0 sudo[149506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eamrfliweqtuleflnlgxtubmkijikxrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431919.1423419-775-238375265488752/AnsiballZ_stat.py'
Oct 02 19:05:19 compute-0 sudo[149506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:19 compute-0 python3.9[149508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:19 compute-0 sudo[149506]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:20 compute-0 sudo[149654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxfgkkywwtbumbwgahslckpaclynjtww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431919.1423419-775-238375265488752/AnsiballZ_copy.py'
Oct 02 19:05:20 compute-0 sudo[149654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:20 compute-0 podman[149603]: 2025-10-02 19:05:20.438632389 +0000 UTC m=+0.106472722 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:05:20 compute-0 podman[149604]: 2025-10-02 19:05:20.446945148 +0000 UTC m=+0.109444810 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 19:05:20 compute-0 python3.9[149666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431919.1423419-775-238375265488752/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:20 compute-0 sudo[149654]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:21 compute-0 sudo[149823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieedmrzswirtoiciihdthsdwizqnxpye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431920.8284209-775-13528349590235/AnsiballZ_stat.py'
Oct 02 19:05:21 compute-0 sudo[149823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:21 compute-0 python3.9[149825]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:21 compute-0 sudo[149823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:21 compute-0 sudo[149946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jylwwejdeaxygahqzhmaiplrwuliltnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431920.8284209-775-13528349590235/AnsiballZ_copy.py'
Oct 02 19:05:21 compute-0 sudo[149946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:21 compute-0 python3.9[149948]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431920.8284209-775-13528349590235/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:22 compute-0 sudo[149946]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:22 compute-0 sudo[150098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btibtwrlblkxowcalhocaoxqtkalmisy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431922.1702378-775-6241351972293/AnsiballZ_stat.py'
Oct 02 19:05:22 compute-0 sudo[150098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:22 compute-0 python3.9[150100]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:22 compute-0 sudo[150098]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:23 compute-0 sudo[150221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihqwurqxlzztgouskueybprwelwbsomg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431922.1702378-775-6241351972293/AnsiballZ_copy.py'
Oct 02 19:05:23 compute-0 sudo[150221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:23 compute-0 python3.9[150223]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431922.1702378-775-6241351972293/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:23 compute-0 sudo[150221]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:23 compute-0 sudo[150373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpiaizfimzpftzahbbqogbeciyzufvlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431923.5198438-775-74254257697492/AnsiballZ_stat.py'
Oct 02 19:05:23 compute-0 sudo[150373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:24 compute-0 python3.9[150375]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:24 compute-0 sudo[150373]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:24 compute-0 sudo[150496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtgdsnlylpkebllgxbvmiepbhprctwgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431923.5198438-775-74254257697492/AnsiballZ_copy.py'
Oct 02 19:05:24 compute-0 sudo[150496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:24 compute-0 python3.9[150498]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431923.5198438-775-74254257697492/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:24 compute-0 sudo[150496]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:25 compute-0 sudo[150648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjfflyhvfyqazkkozwwlskrrijbhcbfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431924.8155942-775-6420742442391/AnsiballZ_stat.py'
Oct 02 19:05:25 compute-0 sudo[150648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:25 compute-0 python3.9[150650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:25 compute-0 sudo[150648]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:25 compute-0 sudo[150771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iescnxtfzuyiglkqchtinsttwfnekrpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431924.8155942-775-6420742442391/AnsiballZ_copy.py'
Oct 02 19:05:25 compute-0 sudo[150771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:25 compute-0 python3.9[150773]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431924.8155942-775-6420742442391/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:25 compute-0 sudo[150771]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:26 compute-0 sudo[150923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-verzskgcehwordimcrdqwokqvastjnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431926.12879-775-120510682538946/AnsiballZ_stat.py'
Oct 02 19:05:26 compute-0 sudo[150923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:26 compute-0 python3.9[150925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:26 compute-0 sudo[150923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:26 compute-0 sudo[151046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovktxhxrpsdvozxmoujqyygqmyinwyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431926.12879-775-120510682538946/AnsiballZ_copy.py'
Oct 02 19:05:26 compute-0 sudo[151046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:27 compute-0 python3.9[151048]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431926.12879-775-120510682538946/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:27 compute-0 sudo[151046]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:27 compute-0 sudo[151198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxmnuelmclxukkccxoimaergbajcynj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431927.3158643-775-123346041343353/AnsiballZ_stat.py'
Oct 02 19:05:27 compute-0 sudo[151198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:27 compute-0 python3.9[151200]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:27 compute-0 sudo[151198]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:28 compute-0 sudo[151321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drlqmtrlibpzlzwnfbfyyahhxxgkuoer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431927.3158643-775-123346041343353/AnsiballZ_copy.py'
Oct 02 19:05:28 compute-0 sudo[151321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:28 compute-0 python3.9[151323]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431927.3158643-775-123346041343353/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:28 compute-0 sudo[151321]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:29 compute-0 sudo[151473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rocwcsxeotymbiocwglcgmwshvmafdcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431928.6231532-775-188862598366316/AnsiballZ_stat.py'
Oct 02 19:05:29 compute-0 sudo[151473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:29 compute-0 python3.9[151475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:29 compute-0 sudo[151473]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:29 compute-0 sudo[151596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwogxvcfzkqexaqdvnjffciseebommpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431928.6231532-775-188862598366316/AnsiballZ_copy.py'
Oct 02 19:05:29 compute-0 sudo[151596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:29 compute-0 python3.9[151598]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431928.6231532-775-188862598366316/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:29 compute-0 sudo[151596]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:30 compute-0 sudo[151748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcwugdiwdkwgxappiauwiwmsexfovykv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431930.0496597-775-51633245746315/AnsiballZ_stat.py'
Oct 02 19:05:30 compute-0 sudo[151748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:30 compute-0 python3.9[151750]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:30 compute-0 sudo[151748]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:31 compute-0 sudo[151871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninsorqzdfdxzkboukudxdvuryzcyzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431930.0496597-775-51633245746315/AnsiballZ_copy.py'
Oct 02 19:05:31 compute-0 sudo[151871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:31 compute-0 python3.9[151873]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431930.0496597-775-51633245746315/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:31 compute-0 sudo[151871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:31 compute-0 sudo[152023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjwvavkfdxxnjwrqaqlydjqdsxmmplg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431931.3918245-775-230616693522507/AnsiballZ_stat.py'
Oct 02 19:05:31 compute-0 sudo[152023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:31 compute-0 python3.9[152025]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:31 compute-0 sudo[152023]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:32 compute-0 sudo[152146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgqrwptquzhwfsrxycgludxwnlqyfnbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431931.3918245-775-230616693522507/AnsiballZ_copy.py'
Oct 02 19:05:32 compute-0 sudo[152146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:32 compute-0 python3.9[152148]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431931.3918245-775-230616693522507/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:32 compute-0 sudo[152146]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:33 compute-0 python3.9[152298]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:05:34 compute-0 sudo[152451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvnywwqrorhkvgxnejnobwfrjlibeye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431933.5734136-981-242733849770178/AnsiballZ_seboolean.py'
Oct 02 19:05:34 compute-0 sudo[152451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:34 compute-0 python3.9[152453]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 02 19:05:35 compute-0 sudo[152451]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:36 compute-0 sudo[152607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfgtasqinuyjcmiaoybmnuyzbabmbesc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431935.802009-989-76958436355071/AnsiballZ_copy.py'
Oct 02 19:05:36 compute-0 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 02 19:05:36 compute-0 sudo[152607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:36 compute-0 python3.9[152609]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:36 compute-0 sudo[152607]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:36 compute-0 sudo[152759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccrmtmxlnuuhjqatctuemjjiuqgvwxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431936.5055351-989-198060151072865/AnsiballZ_copy.py'
Oct 02 19:05:36 compute-0 sudo[152759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:36 compute-0 python3.9[152761]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:37 compute-0 sudo[152759]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:37 compute-0 sudo[152911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmiuukiueeiusdxuzsdvmchwxljgvfvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431937.180256-989-55118402531747/AnsiballZ_copy.py'
Oct 02 19:05:37 compute-0 sudo[152911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:37 compute-0 python3.9[152913]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:37 compute-0 sudo[152911]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:38 compute-0 sudo[153063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqqjsibcpmcbwzbjncdhdmlykqcdvlzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431937.8874965-989-86555231541923/AnsiballZ_copy.py'
Oct 02 19:05:38 compute-0 sudo[153063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:38 compute-0 python3.9[153065]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:38 compute-0 sudo[153063]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:38 compute-0 sudo[153215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkcnmdhnhwzinxzqqnpallzjumucybn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431938.5372868-989-106812384686857/AnsiballZ_copy.py'
Oct 02 19:05:38 compute-0 sudo[153215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:39 compute-0 python3.9[153217]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:39 compute-0 sudo[153215]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:39 compute-0 sudo[153367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsagtkwrvfcuxwjirjmvpjwjrsyrjhlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431939.2715874-1025-100555097889856/AnsiballZ_copy.py'
Oct 02 19:05:39 compute-0 sudo[153367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:39 compute-0 python3.9[153369]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:39 compute-0 sudo[153367]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:40 compute-0 sudo[153519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thklgqfupymgbgrpoumcbpdkcfgakahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431939.9976928-1025-99586553673375/AnsiballZ_copy.py'
Oct 02 19:05:40 compute-0 sudo[153519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:40 compute-0 python3.9[153521]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:40 compute-0 sudo[153519]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:41 compute-0 sudo[153671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwuruaohucxjtcdnvfqzzotiirxpfwbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431940.8103318-1025-87310902522236/AnsiballZ_copy.py'
Oct 02 19:05:41 compute-0 sudo[153671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:41 compute-0 python3.9[153673]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:41 compute-0 sudo[153671]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:41 compute-0 sudo[153823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyruwiiwqgvbdlyeqxiofoyahcdclti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431941.6319263-1025-202010974592593/AnsiballZ_copy.py'
Oct 02 19:05:41 compute-0 sudo[153823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:42 compute-0 python3.9[153825]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:42 compute-0 sudo[153823]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:42 compute-0 sudo[153975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbdnednnnqlbcifzspbxzijplcyqbxld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431942.4143918-1025-24102188235455/AnsiballZ_copy.py'
Oct 02 19:05:42 compute-0 sudo[153975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:42 compute-0 python3.9[153977]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:43 compute-0 sudo[153975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:43 compute-0 sudo[154127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udqqskprrcdyipyxioynypwrxlsotkkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431943.2497027-1061-148947485229922/AnsiballZ_systemd.py'
Oct 02 19:05:43 compute-0 sudo[154127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:43 compute-0 python3.9[154129]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:43 compute-0 systemd[1]: Reloading.
Oct 02 19:05:44 compute-0 systemd-rc-local-generator[154160]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:44 compute-0 systemd-sysv-generator[154163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:05:44 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 02 19:05:44 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 02 19:05:44 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 02 19:05:44 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 02 19:05:44 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 02 19:05:44 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 02 19:05:44 compute-0 sudo[154127]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:44 compute-0 sudo[154320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvavpebzqwtqpsrrcovalfedjxbvnlhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431944.5337052-1061-87108437971061/AnsiballZ_systemd.py'
Oct 02 19:05:44 compute-0 sudo[154320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:45 compute-0 python3.9[154322]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:45 compute-0 systemd[1]: Reloading.
Oct 02 19:05:45 compute-0 systemd-rc-local-generator[154350]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:45 compute-0 systemd-sysv-generator[154354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:05:45 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 02 19:05:45 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 02 19:05:45 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 02 19:05:45 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 02 19:05:45 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 02 19:05:45 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 02 19:05:45 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 19:05:45 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 19:05:45 compute-0 sudo[154320]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:46 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 02 19:05:46 compute-0 sudo[154536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nekmcoyodsyxkidobzepnpsyzlpdagho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431945.7141755-1061-156436913094621/AnsiballZ_systemd.py'
Oct 02 19:05:46 compute-0 sudo[154536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:46 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 02 19:05:46 compute-0 python3.9[154538]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:46 compute-0 systemd[1]: Reloading.
Oct 02 19:05:46 compute-0 systemd-sysv-generator[154565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:05:46 compute-0 systemd-rc-local-generator[154561]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:46 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 02 19:05:46 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 02 19:05:46 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 02 19:05:46 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 02 19:05:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:05:46 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 02 19:05:46 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 02 19:05:46 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:05:46 compute-0 sudo[154536]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:47 compute-0 sudo[154753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffswolvtkaxbcwlvoaejqacdgknfqufw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431947.0316086-1061-226005985110108/AnsiballZ_systemd.py'
Oct 02 19:05:47 compute-0 sudo[154753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:05:47.438 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:05:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:05:47.441 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:05:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:05:47.441 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:05:47 compute-0 python3.9[154755]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:47 compute-0 systemd[1]: Reloading.
Oct 02 19:05:47 compute-0 setroubleshoot[154509]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c1b027b3-f5de-48d4-94d6-740993a43cab
Oct 02 19:05:47 compute-0 setroubleshoot[154509]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 02 19:05:47 compute-0 systemd-sysv-generator[154787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:05:47 compute-0 systemd-rc-local-generator[154783]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:48 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 02 19:05:48 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 02 19:05:48 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 02 19:05:48 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 02 19:05:48 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 02 19:05:48 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 02 19:05:48 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 02 19:05:48 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 02 19:05:48 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 02 19:05:48 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 02 19:05:48 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 19:05:48 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 19:05:48 compute-0 sudo[154753]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:48 compute-0 sudo[154966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyropolafnqyjhmewvxjpguolnpauadt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431948.3565931-1061-153686218098087/AnsiballZ_systemd.py'
Oct 02 19:05:48 compute-0 sudo[154966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:48 compute-0 python3.9[154968]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:05:49 compute-0 systemd[1]: Reloading.
Oct 02 19:05:49 compute-0 systemd-rc-local-generator[154995]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:05:49 compute-0 systemd-sysv-generator[155000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:05:49 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 02 19:05:49 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 02 19:05:49 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 02 19:05:49 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 02 19:05:49 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 02 19:05:49 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 02 19:05:49 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 02 19:05:49 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 02 19:05:49 compute-0 sudo[154966]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:50 compute-0 sudo[155176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlytkylsuilkbkitljtwcmqqsqeilih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431949.7150416-1098-42579997377466/AnsiballZ_file.py'
Oct 02 19:05:50 compute-0 sudo[155176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:50 compute-0 python3.9[155178]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:50 compute-0 sudo[155176]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:50 compute-0 podman[155260]: 2025-10-02 19:05:50.689521615 +0000 UTC m=+0.061688734 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 19:05:50 compute-0 podman[155274]: 2025-10-02 19:05:50.721629491 +0000 UTC m=+0.094134280 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:05:50 compute-0 sudo[155375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sssbdezakpvoggpxvyvxwgtykiutuhxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431950.4738595-1106-76736687169370/AnsiballZ_find.py'
Oct 02 19:05:50 compute-0 sudo[155375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:51 compute-0 python3.9[155377]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:05:51 compute-0 sudo[155375]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:51 compute-0 sudo[155527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcoalwpkqttwduhdfljlttgtqphjjkby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431951.4767094-1120-248780274740190/AnsiballZ_stat.py'
Oct 02 19:05:51 compute-0 sudo[155527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:52 compute-0 python3.9[155529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:52 compute-0 sudo[155527]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:52 compute-0 sudo[155650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxyqyitxjmdtwqbgrcdqvkmmhheaukmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431951.4767094-1120-248780274740190/AnsiballZ_copy.py'
Oct 02 19:05:52 compute-0 sudo[155650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:52 compute-0 python3.9[155652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431951.4767094-1120-248780274740190/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:52 compute-0 sudo[155650]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:53 compute-0 sudo[155802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwqlohydilpxdfpevouserxdycpkksn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431953.1502051-1136-244568233983538/AnsiballZ_file.py'
Oct 02 19:05:53 compute-0 sudo[155802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:53 compute-0 python3.9[155804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:53 compute-0 sudo[155802]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:54 compute-0 sudo[155954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgvfdoixdcjvnumwtaddljnwjdizzlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431954.0182002-1144-10300142270282/AnsiballZ_stat.py'
Oct 02 19:05:54 compute-0 sudo[155954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:54 compute-0 python3.9[155956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:54 compute-0 sudo[155954]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:55 compute-0 sudo[156032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzettloukrrrxzpeugshyeghjhiznmzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431954.0182002-1144-10300142270282/AnsiballZ_file.py'
Oct 02 19:05:55 compute-0 sudo[156032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:55 compute-0 python3.9[156034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:55 compute-0 sudo[156032]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:55 compute-0 sudo[156184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxguyjrdganybrlpwgthnisuvdgmyeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431955.4735823-1156-181872227745792/AnsiballZ_stat.py'
Oct 02 19:05:55 compute-0 sudo[156184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:56 compute-0 python3.9[156186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:56 compute-0 sudo[156184]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:56 compute-0 sudo[156262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmkdqihuuhdbrnrnwmygtgtlhbirlzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431955.4735823-1156-181872227745792/AnsiballZ_file.py'
Oct 02 19:05:56 compute-0 sudo[156262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:56 compute-0 python3.9[156264]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ekhljd23 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:56 compute-0 sudo[156262]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:57 compute-0 sudo[156414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mconblsjknbotwpapfyoecliddrzxjma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431956.8654065-1168-139913115737818/AnsiballZ_stat.py'
Oct 02 19:05:57 compute-0 sudo[156414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:57 compute-0 python3.9[156416]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:05:57 compute-0 sudo[156414]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:57 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 02 19:05:57 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.030s CPU time.
Oct 02 19:05:57 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 02 19:05:57 compute-0 sudo[156492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clebziywvvxkcwbdbdzdjchndazqsdox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431956.8654065-1168-139913115737818/AnsiballZ_file.py'
Oct 02 19:05:57 compute-0 sudo[156492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:58 compute-0 python3.9[156494]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:05:58 compute-0 sudo[156492]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:58 compute-0 sudo[156644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupfhdeormuavfjnzfgqomloksnmcqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431958.471947-1181-90127183824364/AnsiballZ_command.py'
Oct 02 19:05:58 compute-0 sudo[156644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:59 compute-0 python3.9[156646]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:05:59 compute-0 sudo[156644]: pam_unix(sudo:session): session closed for user root
Oct 02 19:05:59 compute-0 sudo[156797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poxnbyipxzuauthvpfoamwxzvafscaap ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759431959.2308795-1189-172770358596228/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:05:59 compute-0 sudo[156797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:05:59 compute-0 python3[156799]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:05:59 compute-0 sudo[156797]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:00 compute-0 sudo[156949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npcflaefrsbgaemwtbaginsspbjqonux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431960.188579-1197-35922112115211/AnsiballZ_stat.py'
Oct 02 19:06:00 compute-0 sudo[156949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:00 compute-0 python3.9[156951]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:00 compute-0 sudo[156949]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:01 compute-0 sudo[157027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfajhudcwllqljcfkfssoatlcdxhimkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431960.188579-1197-35922112115211/AnsiballZ_file.py'
Oct 02 19:06:01 compute-0 sudo[157027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:01 compute-0 python3.9[157029]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:01 compute-0 sudo[157027]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:02 compute-0 sudo[157179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xivvvplbtvjnxegwwqawbavmdqbtqgam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431961.552825-1209-252595116728116/AnsiballZ_stat.py'
Oct 02 19:06:02 compute-0 sudo[157179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:02 compute-0 python3.9[157181]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:02 compute-0 sudo[157179]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:02 compute-0 sudo[157257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkpkkeyhwxmnenzfsazpjeaaprpenfja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431961.552825-1209-252595116728116/AnsiballZ_file.py'
Oct 02 19:06:02 compute-0 sudo[157257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:02 compute-0 python3.9[157259]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:02 compute-0 sudo[157257]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:03 compute-0 sudo[157409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlxazmjqnezulbexxsgfnskpzjnqsypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431963.0570042-1221-107920079686499/AnsiballZ_stat.py'
Oct 02 19:06:03 compute-0 sudo[157409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:03 compute-0 python3.9[157411]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:03 compute-0 sudo[157409]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:03 compute-0 sudo[157487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfnqepvpivepivufkajennzapqqewqhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431963.0570042-1221-107920079686499/AnsiballZ_file.py'
Oct 02 19:06:03 compute-0 sudo[157487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:04 compute-0 python3.9[157489]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:04 compute-0 sudo[157487]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:04 compute-0 sudo[157639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpqlmcuqbfqytcxgjrlfuxqvtaojcsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431964.5386205-1233-88374253508332/AnsiballZ_stat.py'
Oct 02 19:06:04 compute-0 sudo[157639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:05 compute-0 python3.9[157641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:05 compute-0 sudo[157639]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:05 compute-0 sudo[157717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjketrdfmpddcqybndhsokanjgmdvnqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431964.5386205-1233-88374253508332/AnsiballZ_file.py'
Oct 02 19:06:05 compute-0 sudo[157717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:05 compute-0 python3.9[157719]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:05 compute-0 sudo[157717]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:06 compute-0 sudo[157869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbrndpxxslnehgwypuohacyrwwlteri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431965.8873682-1245-210207941841795/AnsiballZ_stat.py'
Oct 02 19:06:06 compute-0 sudo[157869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:06 compute-0 python3.9[157871]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:06 compute-0 sudo[157869]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:06 compute-0 sudo[157994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmmmkgdaykgeiatlnccxfebbzpgshigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431965.8873682-1245-210207941841795/AnsiballZ_copy.py'
Oct 02 19:06:06 compute-0 sudo[157994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:07 compute-0 python3.9[157996]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759431965.8873682-1245-210207941841795/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:07 compute-0 sudo[157994]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:07 compute-0 sudo[158146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bucbyzxccqcmikpsfuzptirzcxxejbgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431967.387229-1260-190598587274893/AnsiballZ_file.py'
Oct 02 19:06:07 compute-0 sudo[158146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:07 compute-0 python3.9[158148]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:07 compute-0 sudo[158146]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:08 compute-0 sudo[158298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yopknoglqmxsdfzbnpwdcggztdmmtzwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431968.1550498-1268-1856772960145/AnsiballZ_command.py'
Oct 02 19:06:08 compute-0 sudo[158298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:08 compute-0 python3.9[158300]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:08 compute-0 sudo[158298]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:09 compute-0 sudo[158453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqvvetiulweljxeiwcvyxjrvscngmbzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431968.8967054-1276-162073523324688/AnsiballZ_blockinfile.py'
Oct 02 19:06:09 compute-0 sudo[158453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:09 compute-0 python3.9[158455]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:09 compute-0 sudo[158453]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:10 compute-0 sudo[158605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saawshogjkociusyoyykeiebbsdczqrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431969.8093233-1285-91520331286389/AnsiballZ_command.py'
Oct 02 19:06:10 compute-0 sudo[158605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:10 compute-0 python3.9[158607]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:10 compute-0 sudo[158605]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:10 compute-0 sudo[158758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdiacsffgtaunjsdnklfaalcovtrrpnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431970.5536737-1293-103286878927829/AnsiballZ_stat.py'
Oct 02 19:06:10 compute-0 sudo[158758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:11 compute-0 python3.9[158760]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:06:11 compute-0 sudo[158758]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:11 compute-0 sudo[158912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfdiiitetrmieufrwknmozlbpheajpve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431971.3513718-1301-64510133384144/AnsiballZ_command.py'
Oct 02 19:06:11 compute-0 sudo[158912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:11 compute-0 python3.9[158914]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:06:11 compute-0 sudo[158912]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:12 compute-0 sudo[159067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdpcqavriqytkzecivzpfzchqkvvsuxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431972.177034-1309-94366281055508/AnsiballZ_file.py'
Oct 02 19:06:12 compute-0 sudo[159067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:12 compute-0 python3.9[159069]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:12 compute-0 sudo[159067]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:13 compute-0 sudo[159219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllailewdfxvvqgwqohnacgpkfcebxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431972.9870014-1317-186289794773518/AnsiballZ_stat.py'
Oct 02 19:06:13 compute-0 sudo[159219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:13 compute-0 python3.9[159221]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:13 compute-0 sudo[159219]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:13 compute-0 sudo[159342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhoybttxvzkjyojipnjeaxrhqtcbqqal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431972.9870014-1317-186289794773518/AnsiballZ_copy.py'
Oct 02 19:06:13 compute-0 sudo[159342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:14 compute-0 python3.9[159344]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431972.9870014-1317-186289794773518/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:14 compute-0 sudo[159342]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:14 compute-0 sudo[159494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjmdidzevmfwewclbmfolmciubcjyud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431974.3726451-1332-246852209382096/AnsiballZ_stat.py'
Oct 02 19:06:14 compute-0 sudo[159494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:14 compute-0 python3.9[159496]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:14 compute-0 sudo[159494]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:15 compute-0 sudo[159617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekrprdptacopgtpexhnvessvbvgqbflg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431974.3726451-1332-246852209382096/AnsiballZ_copy.py'
Oct 02 19:06:15 compute-0 sudo[159617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:15 compute-0 python3.9[159619]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431974.3726451-1332-246852209382096/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:15 compute-0 sudo[159617]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:16 compute-0 sudo[159769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negsowhabkcxcwdtjrhqrlxorqeixfsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431975.8869436-1347-117260737975118/AnsiballZ_stat.py'
Oct 02 19:06:16 compute-0 sudo[159769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:16 compute-0 python3.9[159771]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:16 compute-0 sudo[159769]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:16 compute-0 sudo[159892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqwxnaszxgzfzbtkjgoayqgkilsfbqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431975.8869436-1347-117260737975118/AnsiballZ_copy.py'
Oct 02 19:06:16 compute-0 sudo[159892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:17 compute-0 python3.9[159894]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759431975.8869436-1347-117260737975118/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:17 compute-0 sudo[159892]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:17 compute-0 sudo[160044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgidnrycdozbhrwtdqshagmlyephodba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431977.3088477-1362-153878316786024/AnsiballZ_systemd.py'
Oct 02 19:06:17 compute-0 sudo[160044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:17 compute-0 python3.9[160046]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:06:17 compute-0 systemd[1]: Reloading.
Oct 02 19:06:18 compute-0 systemd-sysv-generator[160073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:06:18 compute-0 systemd-rc-local-generator[160069]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:06:18 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 02 19:06:18 compute-0 sudo[160044]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:18 compute-0 sudo[160236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uupranwdlzkhakuwmgzvsrgpgpidduff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431978.5340466-1370-149069268409842/AnsiballZ_systemd.py'
Oct 02 19:06:18 compute-0 sudo[160236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:19 compute-0 python3.9[160238]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 19:06:19 compute-0 systemd[1]: Reloading.
Oct 02 19:06:19 compute-0 systemd-rc-local-generator[160266]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:06:19 compute-0 systemd-sysv-generator[160270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:06:19 compute-0 systemd[1]: Reloading.
Oct 02 19:06:19 compute-0 systemd-sysv-generator[160306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:06:19 compute-0 systemd-rc-local-generator[160302]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:06:19 compute-0 sudo[160236]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:20 compute-0 sshd-session[106068]: Connection closed by 192.168.122.30 port 42568
Oct 02 19:06:20 compute-0 sshd-session[106065]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:06:20 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Oct 02 19:06:20 compute-0 systemd[1]: session-23.scope: Consumed 3min 40.520s CPU time.
Oct 02 19:06:20 compute-0 systemd-logind[798]: Session 23 logged out. Waiting for processes to exit.
Oct 02 19:06:20 compute-0 systemd-logind[798]: Removed session 23.
Oct 02 19:06:21 compute-0 podman[160334]: 2025-10-02 19:06:21.726547051 +0000 UTC m=+0.093381469 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:06:21 compute-0 podman[160335]: 2025-10-02 19:06:21.749573162 +0000 UTC m=+0.121766255 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:06:26 compute-0 sshd-session[160381]: Accepted publickey for zuul from 192.168.122.30 port 41996 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:06:26 compute-0 systemd-logind[798]: New session 24 of user zuul.
Oct 02 19:06:26 compute-0 systemd[1]: Started Session 24 of User zuul.
Oct 02 19:06:26 compute-0 sshd-session[160381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:06:27 compute-0 python3.9[160534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:06:28 compute-0 sudo[160688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokxjjxehjgdkrnhbfgghtyzdnbyalne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431987.7935207-34-222424473884292/AnsiballZ_file.py'
Oct 02 19:06:28 compute-0 sudo[160688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:28 compute-0 python3.9[160690]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:28 compute-0 sudo[160688]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:29 compute-0 sudo[160840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtutdqfhkwnvjzftucoycrycrdjgyppf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431988.6809487-34-157823495737784/AnsiballZ_file.py'
Oct 02 19:06:29 compute-0 sudo[160840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:29 compute-0 python3.9[160842]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:29 compute-0 sudo[160840]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:29 compute-0 sudo[160992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egcbpnadivelncxzeasmvwnsmvwzulkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431989.4674869-34-78528657409083/AnsiballZ_file.py'
Oct 02 19:06:29 compute-0 sudo[160992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:30 compute-0 python3.9[160994]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:30 compute-0 sudo[160992]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:30 compute-0 sudo[161144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kppcbqstzkkibceuacstawjkzgbahzgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431990.3235257-34-263492094719847/AnsiballZ_file.py'
Oct 02 19:06:30 compute-0 sudo[161144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:30 compute-0 python3.9[161146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:06:30 compute-0 sudo[161144]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:31 compute-0 sudo[161296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdjctdcvzansrupxuhlftoylppsqvdwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431991.060241-34-182465378529131/AnsiballZ_file.py'
Oct 02 19:06:31 compute-0 sudo[161296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:31 compute-0 python3.9[161298]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:31 compute-0 sudo[161296]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:32 compute-0 sudo[161448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esmqetagouaiwwcswgjbesrnnetybiza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431991.8164613-70-252026858164002/AnsiballZ_stat.py'
Oct 02 19:06:32 compute-0 sudo[161448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:32 compute-0 python3.9[161450]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:06:32 compute-0 sudo[161448]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:33 compute-0 sudo[161602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohttoopcpbstgpjawezrwabkllcmexnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431992.811989-78-187687150561702/AnsiballZ_systemd.py'
Oct 02 19:06:33 compute-0 sudo[161602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:33 compute-0 python3.9[161604]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:06:33 compute-0 systemd[1]: Reloading.
Oct 02 19:06:34 compute-0 systemd-rc-local-generator[161631]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:06:34 compute-0 systemd-sysv-generator[161637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:06:34 compute-0 sudo[161602]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:35 compute-0 sudo[161791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drmxdtujvvbezienjoxpuapngsyaymsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759431994.5275497-86-48073464257067/AnsiballZ_service_facts.py'
Oct 02 19:06:35 compute-0 sudo[161791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:35 compute-0 python3.9[161793]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:06:35 compute-0 network[161810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:06:35 compute-0 network[161811]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:06:35 compute-0 network[161812]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:06:40 compute-0 sudo[161791]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:40 compute-0 sudo[162083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzcvarahqwqynxjzlotmyzrvczisclrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432000.5103512-94-6400664562882/AnsiballZ_systemd.py'
Oct 02 19:06:40 compute-0 sudo[162083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:41 compute-0 python3.9[162085]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:06:42 compute-0 systemd[1]: Reloading.
Oct 02 19:06:42 compute-0 systemd-rc-local-generator[162114]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:06:42 compute-0 systemd-sysv-generator[162118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:06:42 compute-0 sudo[162083]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:43 compute-0 python3.9[162272]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:06:44 compute-0 sudo[162422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpcmxrghsdfznlhjpchbrlcjuborvbnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432003.558186-111-180249585361206/AnsiballZ_podman_container.py'
Oct 02 19:06:44 compute-0 sudo[162422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:44 compute-0 python3.9[162424]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None 
pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 19:06:44 compute-0 podman[162461]: 2025-10-02 19:06:44.621138051 +0000 UTC m=+0.054432112 container create 09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 19:06:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:06:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.6514] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/19)
Oct 02 19:06:44 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 19:06:44 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 19:06:44 compute-0 kernel: veth0: entered allmulticast mode
Oct 02 19:06:44 compute-0 kernel: veth0: entered promiscuous mode
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.6837] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/20)
Oct 02 19:06:44 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 19:06:44 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 02 19:06:44 compute-0 systemd-udevd[162482]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:06:44 compute-0 systemd-udevd[162480]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:06:44 compute-0 podman[162461]: 2025-10-02 19:06:44.589090717 +0000 UTC m=+0.022384758 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.6882] device (veth0): carrier: link connected
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.6885] device (podman0): carrier: link connected
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7101] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7116] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7132] device (podman0): Activation: starting connection 'podman0' (95bc8caf-5f9f-4ce8-a00c-155bedf17917)
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7135] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7142] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7147] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7150] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 19:06:44 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 19:06:44 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7494] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7497] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 19:06:44 compute-0 NetworkManager[52324]: <info>  [1759432004.7509] device (podman0): Activation: successful, device activated.
Oct 02 19:06:44 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 02 19:06:44 compute-0 systemd[1]: Started libpod-conmon-09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7.scope.
Oct 02 19:06:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:06:45 compute-0 podman[162461]: 2025-10-02 19:06:45.068885276 +0000 UTC m=+0.502179397 container init 09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:06:45 compute-0 podman[162461]: 2025-10-02 19:06:45.077226298 +0000 UTC m=+0.510520349 container start 09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:06:45 compute-0 iscsid_config[162618]: iqn.1994-05.com.redhat:ff0303ff443
Oct 02 19:06:45 compute-0 podman[162461]: 2025-10-02 19:06:45.081728249 +0000 UTC m=+0.515022280 container attach 09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:06:45 compute-0 systemd[1]: libpod-09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7.scope: Deactivated successfully.
Oct 02 19:06:45 compute-0 podman[162461]: 2025-10-02 19:06:45.083887076 +0000 UTC m=+0.517181147 container died 09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 19:06:45 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 19:06:45 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 02 19:06:45 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 02 19:06:45 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 19:06:45 compute-0 NetworkManager[52324]: <info>  [1759432005.1371] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:06:45 compute-0 systemd[1]: run-netns-netns\x2d7c4aac01\x2daf78\x2d9482\x2d654d\x2d0d3d88c0c740.mount: Deactivated successfully.
Oct 02 19:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7-userdata-shm.mount: Deactivated successfully.
Oct 02 19:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d1e605d27943761fb8e50e3b7ede0ac7c582ea75fed8c83e33bd5662be56daa-merged.mount: Deactivated successfully.
Oct 02 19:06:45 compute-0 podman[162461]: 2025-10-02 19:06:45.529749981 +0000 UTC m=+0.963044002 container remove 09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:06:45 compute-0 systemd[1]: libpod-conmon-09ded027106c8f4b0403a4863ba248c504f4761cec69bf8c81166db58a5f3ed7.scope: Deactivated successfully.
Oct 02 19:06:45 compute-0 python3.9[162424]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 02 19:06:45 compute-0 python3.9[162424]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 02 19:06:45 compute-0 sudo[162422]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:46 compute-0 sudo[162859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmvpibszgjfdwgulnlzglwqlrcmtisap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432005.8732135-119-207063310725862/AnsiballZ_stat.py'
Oct 02 19:06:46 compute-0 sudo[162859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:46 compute-0 python3.9[162861]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:46 compute-0 sudo[162859]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:47 compute-0 sudo[162982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcifxmmwzwvowiiyawmromtlstswncz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432005.8732135-119-207063310725862/AnsiballZ_copy.py'
Oct 02 19:06:47 compute-0 sudo[162982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:47 compute-0 python3.9[162984]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432005.8732135-119-207063310725862/.source.iscsi _original_basename=.2ism0v5u follow=False checksum=2417fcfe68c9a7c392ae6882b80109b76632b892 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:47 compute-0 sudo[162982]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:06:47.439 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:06:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:06:47.442 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:06:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:06:47.442 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:06:47 compute-0 sudo[163134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvhqxlwzwqpraydxlgaafxovitdjvaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432007.4729753-134-175787300302964/AnsiballZ_file.py'
Oct 02 19:06:47 compute-0 sudo[163134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:47 compute-0 python3.9[163136]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:48 compute-0 sudo[163134]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:48 compute-0 python3.9[163286]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:06:49 compute-0 sudo[163438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmigctcbyhudwsyqeerbxkyyqnojisom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432008.983216-151-242463603570412/AnsiballZ_lineinfile.py'
Oct 02 19:06:49 compute-0 sudo[163438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:49 compute-0 python3.9[163440]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:49 compute-0 sudo[163438]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:50 compute-0 sudo[163590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhemwjugcgnkecynhgpkngppriudwmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432010.0133944-160-98487458743443/AnsiballZ_file.py'
Oct 02 19:06:50 compute-0 sudo[163590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:50 compute-0 python3.9[163592]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:50 compute-0 sudo[163590]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:51 compute-0 sudo[163742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfjawrgcztikvxjznmkvyurudeksaayx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432010.8401096-168-38190441305126/AnsiballZ_stat.py'
Oct 02 19:06:51 compute-0 sudo[163742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:51 compute-0 python3.9[163744]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:51 compute-0 sudo[163742]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:51 compute-0 sudo[163820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhircerhkqceiwbeztngaabauibqrzyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432010.8401096-168-38190441305126/AnsiballZ_file.py'
Oct 02 19:06:51 compute-0 sudo[163820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:51 compute-0 python3.9[163822]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:51 compute-0 sudo[163820]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:52 compute-0 sudo[164003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zylbrghollibwfdoqgvxkiahuwfkinbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432012.149348-168-188185587701636/AnsiballZ_stat.py'
Oct 02 19:06:52 compute-0 sudo[164003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:52 compute-0 podman[163946]: 2025-10-02 19:06:52.520075277 +0000 UTC m=+0.071638131 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:06:52 compute-0 podman[163947]: 2025-10-02 19:06:52.552989044 +0000 UTC m=+0.104051744 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:06:52 compute-0 python3.9[164011]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:52 compute-0 sudo[164003]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:53 compute-0 sudo[164094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noqurxtxvhlwgqovkxrcrzqgjggojztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432012.149348-168-188185587701636/AnsiballZ_file.py'
Oct 02 19:06:53 compute-0 sudo[164094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:53 compute-0 python3.9[164096]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:06:53 compute-0 sudo[164094]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:53 compute-0 sudo[164246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvrlvgiinaxbscjnvmdovzgxkkildfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432013.4372363-191-238946038152954/AnsiballZ_file.py'
Oct 02 19:06:53 compute-0 sudo[164246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:53 compute-0 python3.9[164248]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:53 compute-0 sudo[164246]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:54 compute-0 sudo[164398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwhywymntlvikuhwvqcqlcpfkmmgkpqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432014.177525-199-78505294787282/AnsiballZ_stat.py'
Oct 02 19:06:54 compute-0 sudo[164398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:54 compute-0 python3.9[164400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:54 compute-0 sudo[164398]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:55 compute-0 sudo[164476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdylditlzpmtesrpokqwcorpmqrrqjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432014.177525-199-78505294787282/AnsiballZ_file.py'
Oct 02 19:06:55 compute-0 sudo[164476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 19:06:55 compute-0 python3.9[164478]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:55 compute-0 sudo[164476]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:55 compute-0 sudo[164628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agvgmqgkkvmybgnftpcltytcwkuuxcbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432015.4447877-211-211285482010613/AnsiballZ_stat.py'
Oct 02 19:06:55 compute-0 sudo[164628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:55 compute-0 python3.9[164630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:56 compute-0 sudo[164628]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:56 compute-0 sudo[164706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwibqqprxiasltwzuhwiczvjnxlkirn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432015.4447877-211-211285482010613/AnsiballZ_file.py'
Oct 02 19:06:56 compute-0 sudo[164706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:56 compute-0 python3.9[164708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:56 compute-0 sudo[164706]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:56 compute-0 sudo[164858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahaizrjmkdljwvahkvwgmvwilhwihxbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432016.6234233-223-9863108358153/AnsiballZ_systemd.py'
Oct 02 19:06:56 compute-0 sudo[164858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:57 compute-0 python3.9[164860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:06:57 compute-0 systemd[1]: Reloading.
Oct 02 19:06:57 compute-0 systemd-sysv-generator[164888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:06:57 compute-0 systemd-rc-local-generator[164883]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:06:57 compute-0 sudo[164858]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:58 compute-0 sudo[165047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geuhduvpqedqgpatibcbatdbgsbilpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432017.844974-231-279787856526960/AnsiballZ_stat.py'
Oct 02 19:06:58 compute-0 sudo[165047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:58 compute-0 python3.9[165049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:58 compute-0 sudo[165047]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:58 compute-0 sudo[165125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndghjmvsgyyixqyrdlxddjfybkmuoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432017.844974-231-279787856526960/AnsiballZ_file.py'
Oct 02 19:06:58 compute-0 sudo[165125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:58 compute-0 python3.9[165127]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:06:59 compute-0 sudo[165125]: pam_unix(sudo:session): session closed for user root
Oct 02 19:06:59 compute-0 sudo[165277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlmbtymtainnaaiqwfuhfxuayvpzkpwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432019.2128804-243-221276835581084/AnsiballZ_stat.py'
Oct 02 19:06:59 compute-0 sudo[165277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:06:59 compute-0 python3.9[165279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:06:59 compute-0 sudo[165277]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:00 compute-0 sudo[165355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhzbiusteszgaxfaxdvtnfdkhtcorkfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432019.2128804-243-221276835581084/AnsiballZ_file.py'
Oct 02 19:07:00 compute-0 sudo[165355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:00 compute-0 python3.9[165357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:00 compute-0 sudo[165355]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:00 compute-0 sudo[165507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxdpfnrcmtomntqwkewgqdtfbgeuufot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432020.4795249-255-31592453683955/AnsiballZ_systemd.py'
Oct 02 19:07:00 compute-0 sudo[165507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:01 compute-0 python3.9[165509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:01 compute-0 systemd[1]: Reloading.
Oct 02 19:07:01 compute-0 systemd-sysv-generator[165539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:01 compute-0 systemd-rc-local-generator[165534]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:01 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:07:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:07:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:07:01 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:07:01 compute-0 sudo[165507]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:02 compute-0 sudo[165700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsqyqczvmfvzsohzppuctbfftjznghdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432021.7971654-265-257451333991166/AnsiballZ_file.py'
Oct 02 19:07:02 compute-0 sudo[165700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:02 compute-0 python3.9[165702]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:02 compute-0 sudo[165700]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:03 compute-0 sudo[165852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmjxehounzmcqdudxoixwumpppkuuzmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432022.7175672-273-96813113106802/AnsiballZ_stat.py'
Oct 02 19:07:03 compute-0 sudo[165852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:03 compute-0 python3.9[165854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:03 compute-0 sudo[165852]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:03 compute-0 sudo[165975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfqzmbtwmgrlikvmcvquftizqxaumua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432022.7175672-273-96813113106802/AnsiballZ_copy.py'
Oct 02 19:07:03 compute-0 sudo[165975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:03 compute-0 python3.9[165977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432022.7175672-273-96813113106802/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:03 compute-0 sudo[165975]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:04 compute-0 sudo[166127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhxpuzpmnltftoypdmnkfimfppezffhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432024.3155267-290-59481827350480/AnsiballZ_file.py'
Oct 02 19:07:04 compute-0 sudo[166127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:04 compute-0 python3.9[166129]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:04 compute-0 sudo[166127]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:05 compute-0 sudo[166279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rubbkikjmjvaopyhwxrrjmikhbfauohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432025.139155-298-257672458455000/AnsiballZ_stat.py'
Oct 02 19:07:05 compute-0 sudo[166279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:05 compute-0 python3.9[166281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:05 compute-0 sudo[166279]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:06 compute-0 sudo[166402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymyczfpjncamnebbhskdcelclyjuqxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432025.139155-298-257672458455000/AnsiballZ_copy.py'
Oct 02 19:07:06 compute-0 sudo[166402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:06 compute-0 python3.9[166404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432025.139155-298-257672458455000/.source.json _original_basename=.ia54nroi follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:06 compute-0 sudo[166402]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:06 compute-0 sudo[166554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-netsclpmtrrscsgsiiznhneozuovsfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432026.6510396-313-89931483598406/AnsiballZ_file.py'
Oct 02 19:07:06 compute-0 sudo[166554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:07 compute-0 python3.9[166556]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:07 compute-0 sudo[166554]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:07 compute-0 sudo[166706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvxkrztaiyiqwdaqubecybknkewisdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432027.42447-321-5851492742607/AnsiballZ_stat.py'
Oct 02 19:07:07 compute-0 sudo[166706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:08 compute-0 sudo[166706]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:08 compute-0 sudo[166829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnnnanrnmbftyzjidavnlvpuzwiueptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432027.42447-321-5851492742607/AnsiballZ_copy.py'
Oct 02 19:07:08 compute-0 sudo[166829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:08 compute-0 sudo[166829]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:09 compute-0 sudo[166981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfpajjcmolmaubtqqvgsudqcakcqrsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432029.085388-338-28818419232787/AnsiballZ_container_config_data.py'
Oct 02 19:07:09 compute-0 sudo[166981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:09 compute-0 python3.9[166983]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 02 19:07:09 compute-0 sudo[166981]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:10 compute-0 sudo[167133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqkobcqxqmhoidcuoynlvfwhbjblgyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432030.184529-347-41755341419668/AnsiballZ_container_config_hash.py'
Oct 02 19:07:10 compute-0 sudo[167133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:10 compute-0 python3.9[167135]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:07:10 compute-0 sudo[167133]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:11 compute-0 sudo[167285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqzzxdwxjxdbggwujmwipyjkbwonmdpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432031.3070028-356-144655385977083/AnsiballZ_podman_container_info.py'
Oct 02 19:07:11 compute-0 sudo[167285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:12 compute-0 python3.9[167287]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:07:12 compute-0 sudo[167285]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:13 compute-0 sudo[167462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gflkfxkokfuoveslskleexnxpjdcmgjq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432032.888544-369-85856833179616/AnsiballZ_edpm_container_manage.py'
Oct 02 19:07:13 compute-0 sudo[167462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:13 compute-0 python3[167464]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:07:14 compute-0 podman[167499]: 2025-10-02 19:07:14.014108096 +0000 UTC m=+0.055854190 container create e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=iscsid, tcib_managed=true)
Oct 02 19:07:14 compute-0 podman[167499]: 2025-10-02 19:07:13.983158151 +0000 UTC m=+0.024904285 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:07:14 compute-0 python3[167464]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 19:07:14 compute-0 sudo[167462]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:14 compute-0 sudo[167687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raaqrzgfpxrvuzbdrkpkirkigtywypqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432034.3800762-377-162736763348880/AnsiballZ_stat.py'
Oct 02 19:07:14 compute-0 sudo[167687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:14 compute-0 python3.9[167689]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:14 compute-0 sudo[167687]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:15 compute-0 sudo[167841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehvadskzzughuyjyumjlwfsjlqcvhwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432035.1515079-386-253982692479627/AnsiballZ_file.py'
Oct 02 19:07:15 compute-0 sudo[167841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:15 compute-0 python3.9[167843]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:15 compute-0 sudo[167841]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:15 compute-0 sudo[167917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shbfinnvqzevymgduimiubyirmeykhbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432035.1515079-386-253982692479627/AnsiballZ_stat.py'
Oct 02 19:07:15 compute-0 sudo[167917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:16 compute-0 python3.9[167919]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:16 compute-0 sudo[167917]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:16 compute-0 sudo[168068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhgxhbwapjrnifeqpkroscdaendrfubt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432036.2233217-386-181201252826802/AnsiballZ_copy.py'
Oct 02 19:07:16 compute-0 sudo[168068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:17 compute-0 python3.9[168070]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432036.2233217-386-181201252826802/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:17 compute-0 sudo[168068]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:17 compute-0 sudo[168144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jftjycpcdcshgcmdmkhrktqpluneetyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432036.2233217-386-181201252826802/AnsiballZ_systemd.py'
Oct 02 19:07:17 compute-0 sudo[168144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:17 compute-0 python3.9[168146]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:07:17 compute-0 systemd[1]: Reloading.
Oct 02 19:07:17 compute-0 systemd-rc-local-generator[168174]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:17 compute-0 systemd-sysv-generator[168178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:18 compute-0 sudo[168144]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:18 compute-0 sudo[168255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclcbhnagrcnellxwiuuqjvmgnqngddy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432036.2233217-386-181201252826802/AnsiballZ_systemd.py'
Oct 02 19:07:18 compute-0 sudo[168255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:18 compute-0 python3.9[168257]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:18 compute-0 systemd[1]: Reloading.
Oct 02 19:07:18 compute-0 systemd-rc-local-generator[168288]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:18 compute-0 systemd-sysv-generator[168291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:19 compute-0 systemd[1]: Starting iscsid container...
Oct 02 19:07:19 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c03b26b190a6a7ba5262787daa60c8d988b307f4f6110be56ed38446cd63567/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c03b26b190a6a7ba5262787daa60c8d988b307f4f6110be56ed38446cd63567/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 02 19:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c03b26b190a6a7ba5262787daa60c8d988b307f4f6110be56ed38446cd63567/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:07:19 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.
Oct 02 19:07:19 compute-0 podman[168298]: 2025-10-02 19:07:19.247296095 +0000 UTC m=+0.132125963 container init e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 19:07:19 compute-0 iscsid[168313]: + sudo -E kolla_set_configs
Oct 02 19:07:19 compute-0 podman[168298]: 2025-10-02 19:07:19.282972026 +0000 UTC m=+0.167801904 container start e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 19:07:19 compute-0 podman[168298]: iscsid
Oct 02 19:07:19 compute-0 sudo[168319]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:07:19 compute-0 systemd[1]: Started iscsid container.
Oct 02 19:07:19 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 02 19:07:19 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 19:07:19 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 19:07:19 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 02 19:07:19 compute-0 sudo[168255]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:19 compute-0 systemd[168335]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 19:07:19 compute-0 podman[168321]: 2025-10-02 19:07:19.381196094 +0000 UTC m=+0.088685055 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:07:19 compute-0 systemd[1]: e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd-5cb3eceb4883c64.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:07:19 compute-0 systemd[1]: e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd-5cb3eceb4883c64.service: Failed with result 'exit-code'.
Oct 02 19:07:19 compute-0 systemd[168335]: Queued start job for default target Main User Target.
Oct 02 19:07:19 compute-0 systemd[168335]: Created slice User Application Slice.
Oct 02 19:07:19 compute-0 systemd[168335]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 19:07:19 compute-0 systemd[168335]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 19:07:19 compute-0 systemd[168335]: Reached target Paths.
Oct 02 19:07:19 compute-0 systemd[168335]: Reached target Timers.
Oct 02 19:07:19 compute-0 systemd[168335]: Starting D-Bus User Message Bus Socket...
Oct 02 19:07:19 compute-0 systemd[168335]: Starting Create User's Volatile Files and Directories...
Oct 02 19:07:19 compute-0 systemd[168335]: Listening on D-Bus User Message Bus Socket.
Oct 02 19:07:19 compute-0 systemd[168335]: Reached target Sockets.
Oct 02 19:07:19 compute-0 systemd[168335]: Finished Create User's Volatile Files and Directories.
Oct 02 19:07:19 compute-0 systemd[168335]: Reached target Basic System.
Oct 02 19:07:19 compute-0 systemd[168335]: Reached target Main User Target.
Oct 02 19:07:19 compute-0 systemd[168335]: Startup finished in 134ms.
Oct 02 19:07:19 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 02 19:07:19 compute-0 systemd[1]: Started Session c3 of User root.
Oct 02 19:07:19 compute-0 sudo[168319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:07:19 compute-0 iscsid[168313]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:07:19 compute-0 iscsid[168313]: INFO:__main__:Validating config file
Oct 02 19:07:19 compute-0 iscsid[168313]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:07:19 compute-0 iscsid[168313]: INFO:__main__:Writing out command to execute
Oct 02 19:07:19 compute-0 sudo[168319]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:19 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 02 19:07:19 compute-0 iscsid[168313]: ++ cat /run_command
Oct 02 19:07:19 compute-0 iscsid[168313]: + CMD='/usr/sbin/iscsid -f'
Oct 02 19:07:19 compute-0 iscsid[168313]: + ARGS=
Oct 02 19:07:19 compute-0 iscsid[168313]: + sudo kolla_copy_cacerts
Oct 02 19:07:19 compute-0 sudo[168432]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:07:19 compute-0 systemd[1]: Started Session c4 of User root.
Oct 02 19:07:19 compute-0 sudo[168432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:07:19 compute-0 sudo[168432]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:19 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 02 19:07:19 compute-0 iscsid[168313]: + [[ ! -n '' ]]
Oct 02 19:07:19 compute-0 iscsid[168313]: + . kolla_extend_start
Oct 02 19:07:19 compute-0 iscsid[168313]: Running command: '/usr/sbin/iscsid -f'
Oct 02 19:07:19 compute-0 iscsid[168313]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 02 19:07:19 compute-0 iscsid[168313]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 02 19:07:19 compute-0 iscsid[168313]: + umask 0022
Oct 02 19:07:19 compute-0 iscsid[168313]: + exec /usr/sbin/iscsid -f
Oct 02 19:07:19 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 02 19:07:20 compute-0 python3.9[168519]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:20 compute-0 sudo[168669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbuicuudlgxxbzceisteyaefcdbsrtol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432040.221527-423-100661039865204/AnsiballZ_file.py'
Oct 02 19:07:20 compute-0 sudo[168669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:20 compute-0 python3.9[168671]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:20 compute-0 sudo[168669]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:21 compute-0 sudo[168821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itvdtxuviuwqlrmikadzkqqxhufttdpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432041.1695979-434-78041280414353/AnsiballZ_service_facts.py'
Oct 02 19:07:21 compute-0 sudo[168821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:21 compute-0 python3.9[168823]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:07:21 compute-0 network[168840]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:07:21 compute-0 network[168841]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:07:21 compute-0 network[168842]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:07:22 compute-0 podman[168848]: 2025-10-02 19:07:22.737904267 +0000 UTC m=+0.065440245 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:07:22 compute-0 podman[168850]: 2025-10-02 19:07:22.764162117 +0000 UTC m=+0.096838282 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:07:26 compute-0 sudo[168821]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:27 compute-0 sudo[169158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsphcijkigjpddswahrtkpmsygfisfmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432046.658916-444-221755468309576/AnsiballZ_file.py'
Oct 02 19:07:27 compute-0 sudo[169158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:27 compute-0 python3.9[169160]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:07:27 compute-0 sudo[169158]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:27 compute-0 sudo[169311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnhbilcmxhqduwshmumuokmwzfqlrwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432047.4855354-452-94488810268617/AnsiballZ_modprobe.py'
Oct 02 19:07:27 compute-0 sudo[169311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:28 compute-0 python3.9[169313]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 02 19:07:28 compute-0 sudo[169311]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:28 compute-0 sudo[169467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwnaqbhmruxpyhgfkmnzwuluoxsehflq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432048.3790205-460-195829074248057/AnsiballZ_stat.py'
Oct 02 19:07:28 compute-0 sudo[169467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:28 compute-0 python3.9[169469]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:28 compute-0 sudo[169467]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:29 compute-0 sudo[169590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpvlbxhpidoqnhkfsbcuhkixxbqzphzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432048.3790205-460-195829074248057/AnsiballZ_copy.py'
Oct 02 19:07:29 compute-0 sudo[169590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:29 compute-0 python3.9[169592]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432048.3790205-460-195829074248057/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:29 compute-0 sudo[169590]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:29 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 02 19:07:29 compute-0 systemd[168335]: Activating special unit Exit the Session...
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped target Main User Target.
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped target Basic System.
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped target Paths.
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped target Sockets.
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped target Timers.
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 19:07:29 compute-0 systemd[168335]: Closed D-Bus User Message Bus Socket.
Oct 02 19:07:29 compute-0 systemd[168335]: Stopped Create User's Volatile Files and Directories.
Oct 02 19:07:29 compute-0 systemd[168335]: Removed slice User Application Slice.
Oct 02 19:07:29 compute-0 systemd[168335]: Reached target Shutdown.
Oct 02 19:07:29 compute-0 systemd[168335]: Finished Exit the Session.
Oct 02 19:07:29 compute-0 systemd[168335]: Reached target Exit the Session.
Oct 02 19:07:29 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 19:07:29 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 02 19:07:29 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 19:07:29 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 19:07:29 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 19:07:29 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 19:07:29 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 19:07:30 compute-0 sudo[169743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrtayjrxiznjdoczqodvxjhrixszwmds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432049.7648368-476-97030282561249/AnsiballZ_lineinfile.py'
Oct 02 19:07:30 compute-0 sudo[169743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:30 compute-0 python3.9[169745]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:30 compute-0 sudo[169743]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:30 compute-0 sudo[169895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvtaxdeetdepobfdaxserapckafjugv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432050.555115-484-2407815021581/AnsiballZ_systemd.py'
Oct 02 19:07:30 compute-0 sudo[169895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:31 compute-0 python3.9[169897]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:07:31 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 19:07:31 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 19:07:31 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 19:07:31 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 19:07:31 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 19:07:31 compute-0 sudo[169895]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:31 compute-0 sudo[170051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykjeccrctbtxofsjlqkbaprokhsqsitt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432051.525248-492-113695210461249/AnsiballZ_file.py'
Oct 02 19:07:31 compute-0 sudo[170051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:32 compute-0 python3.9[170053]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:32 compute-0 sudo[170051]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:32 compute-0 sudo[170203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxtfnyjbseuabeevqtmwowmrvoizpagi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432052.396308-501-153778667709205/AnsiballZ_stat.py'
Oct 02 19:07:32 compute-0 sudo[170203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:32 compute-0 python3.9[170205]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:32 compute-0 sudo[170203]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:33 compute-0 sudo[170355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-codxovjbftjqtzilzfeesrgfjakzzloj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432053.236318-510-273735478453488/AnsiballZ_stat.py'
Oct 02 19:07:33 compute-0 sudo[170355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:33 compute-0 python3.9[170357]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:33 compute-0 sudo[170355]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:34 compute-0 sudo[170507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqhofnlbmvptoothczasacdpaeuxtktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432053.9703455-518-91092328695155/AnsiballZ_stat.py'
Oct 02 19:07:34 compute-0 sudo[170507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:34 compute-0 python3.9[170509]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:34 compute-0 sudo[170507]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:35 compute-0 sudo[170630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjvleogrzkddngshwgpbvpoazlltvjmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432053.9703455-518-91092328695155/AnsiballZ_copy.py'
Oct 02 19:07:35 compute-0 sudo[170630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:35 compute-0 python3.9[170632]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432053.9703455-518-91092328695155/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:35 compute-0 sudo[170630]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:35 compute-0 sudo[170782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfjikgyinhmrdobncxmrczhjosvzfsoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432055.4759185-533-171917809308957/AnsiballZ_command.py'
Oct 02 19:07:35 compute-0 sudo[170782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:36 compute-0 python3.9[170784]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:07:36 compute-0 sudo[170782]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:36 compute-0 sudo[170935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofznihgdavvkfqtpatdmtumowmnzgvpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432056.339573-541-107492717597145/AnsiballZ_lineinfile.py'
Oct 02 19:07:36 compute-0 sudo[170935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:36 compute-0 python3.9[170937]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:36 compute-0 sudo[170935]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:37 compute-0 sudo[171087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxyghtyqncpqxisbohbhjyvhkhcbmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432057.062495-549-110789725065909/AnsiballZ_replace.py'
Oct 02 19:07:37 compute-0 sudo[171087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:37 compute-0 python3.9[171089]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:37 compute-0 sudo[171087]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:38 compute-0 sudo[171239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstidoissjwvcixacapjkweicmiowmkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432057.9808412-557-127164031489487/AnsiballZ_replace.py'
Oct 02 19:07:38 compute-0 sudo[171239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:38 compute-0 python3.9[171241]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:38 compute-0 sudo[171239]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:39 compute-0 sudo[171391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrpfpyqupfzfgtpgzqqdnavttlafnbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432058.828199-566-45395184195530/AnsiballZ_lineinfile.py'
Oct 02 19:07:39 compute-0 sudo[171391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:39 compute-0 python3.9[171393]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:39 compute-0 sudo[171391]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:39 compute-0 sudo[171543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwmsrjjbuoiseddofjsrntgcqorrracq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432059.5929155-566-144786177787423/AnsiballZ_lineinfile.py'
Oct 02 19:07:39 compute-0 sudo[171543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:40 compute-0 python3.9[171545]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:40 compute-0 sudo[171543]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:40 compute-0 sudo[171695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqsjdhlcupftpjqtoguffwtjyemgvffo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432060.4080966-566-101339547507793/AnsiballZ_lineinfile.py'
Oct 02 19:07:40 compute-0 sudo[171695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:41 compute-0 python3.9[171697]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:41 compute-0 sudo[171695]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:41 compute-0 sudo[171847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsdtnfjczrsstlzvbuhxuybpqoekswgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432061.2306929-566-262250201535607/AnsiballZ_lineinfile.py'
Oct 02 19:07:41 compute-0 sudo[171847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:41 compute-0 python3.9[171849]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:41 compute-0 sudo[171847]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:42 compute-0 sudo[171999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiahiplosfnugdafowzrsylbbdijxqcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432062.0107987-595-27243700819588/AnsiballZ_stat.py'
Oct 02 19:07:42 compute-0 sudo[171999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:42 compute-0 python3.9[172001]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:07:42 compute-0 sudo[171999]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:43 compute-0 sudo[172153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puioishrschpkgwakxqohvnksyzovqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432062.8304822-603-265320898095979/AnsiballZ_file.py'
Oct 02 19:07:43 compute-0 sudo[172153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:43 compute-0 python3.9[172155]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:43 compute-0 sudo[172153]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:44 compute-0 sudo[172305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzukhrxvcxzxpkpwoptyhafmpblahykg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432063.6798544-612-87105128008998/AnsiballZ_file.py'
Oct 02 19:07:44 compute-0 sudo[172305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:44 compute-0 python3.9[172307]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:44 compute-0 sudo[172305]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:44 compute-0 sudo[172457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aprpzaszarxgebaeiuegssktxodlmelv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432064.4814038-620-7329440500849/AnsiballZ_stat.py'
Oct 02 19:07:44 compute-0 sudo[172457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:45 compute-0 python3.9[172459]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:45 compute-0 sudo[172457]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:45 compute-0 sudo[172535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmavigveodjphndbcdqkzyvsrthaeuqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432064.4814038-620-7329440500849/AnsiballZ_file.py'
Oct 02 19:07:45 compute-0 sudo[172535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:45 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 02 19:07:45 compute-0 python3.9[172537]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:45 compute-0 sudo[172535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:46 compute-0 sudo[172688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmfjyofqwlekjbbyvmlhsbwfflqzdpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432065.8644702-620-129889294071675/AnsiballZ_stat.py'
Oct 02 19:07:46 compute-0 sudo[172688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:46 compute-0 python3.9[172690]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:46 compute-0 sudo[172688]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:46 compute-0 sudo[172766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaegbbkifktdlzovqrdakprgxbcltnoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432065.8644702-620-129889294071675/AnsiballZ_file.py'
Oct 02 19:07:46 compute-0 sudo[172766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:07:46 compute-0 python3.9[172768]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:47 compute-0 sudo[172766]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:07:47.440 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:07:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:07:47.441 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:07:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:07:47.441 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:07:47 compute-0 sudo[172919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euwbfkkxmtcsjkrlziazwxskfawjvffd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432067.2448356-643-221153831001365/AnsiballZ_file.py'
Oct 02 19:07:47 compute-0 sudo[172919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:47 compute-0 python3.9[172921]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:47 compute-0 sudo[172919]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:48 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 02 19:07:48 compute-0 sudo[173072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjhyvwnwdoiymtovzfdysygegmowhyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432067.9588058-651-67344777452902/AnsiballZ_stat.py'
Oct 02 19:07:48 compute-0 sudo[173072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:48 compute-0 python3.9[173074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:48 compute-0 sudo[173072]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:48 compute-0 sudo[173150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngkdjzfjgqioykhyenuhuvibjfmrdwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432067.9588058-651-67344777452902/AnsiballZ_file.py'
Oct 02 19:07:48 compute-0 sudo[173150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:49 compute-0 python3.9[173152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:49 compute-0 sudo[173150]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:49 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 19:07:49 compute-0 podman[173237]: 2025-10-02 19:07:49.569969411 +0000 UTC m=+0.082221458 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:07:49 compute-0 sudo[173324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onuqgpoirboejzqzjyydjlvicjburkac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432069.3378448-663-276568265368254/AnsiballZ_stat.py'
Oct 02 19:07:49 compute-0 sudo[173324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:49 compute-0 python3.9[173326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:49 compute-0 sudo[173324]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:50 compute-0 sudo[173402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwokedivdxeazmhzabvewafgmeucjdob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432069.3378448-663-276568265368254/AnsiballZ_file.py'
Oct 02 19:07:50 compute-0 sudo[173402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:50 compute-0 python3.9[173404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:50 compute-0 sudo[173402]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:50 compute-0 sudo[173554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjehuezpadsdpayhkuyqikzupognjrna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432070.6522048-675-113187811495939/AnsiballZ_systemd.py'
Oct 02 19:07:50 compute-0 sudo[173554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:51 compute-0 python3.9[173556]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:51 compute-0 systemd[1]: Reloading.
Oct 02 19:07:51 compute-0 systemd-rc-local-generator[173585]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:51 compute-0 systemd-sysv-generator[173588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:51 compute-0 sudo[173554]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:52 compute-0 sudo[173744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wludloxdrnuyytgnobxonyckeqokttkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432071.8766785-683-191238997522533/AnsiballZ_stat.py'
Oct 02 19:07:52 compute-0 sudo[173744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:52 compute-0 python3.9[173746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:52 compute-0 sudo[173744]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:52 compute-0 sudo[173822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uglzuvcoxqonhnhimnixxitrwiswxhws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432071.8766785-683-191238997522533/AnsiballZ_file.py'
Oct 02 19:07:52 compute-0 sudo[173822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:52 compute-0 python3.9[173824]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:52 compute-0 sudo[173822]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:53 compute-0 podman[173948]: 2025-10-02 19:07:53.471326029 +0000 UTC m=+0.062622875 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 02 19:07:53 compute-0 sudo[174004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojzsbjeyytwvhffoqdwixsgjjeodkisv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432073.1043022-695-206672307877726/AnsiballZ_stat.py'
Oct 02 19:07:53 compute-0 sudo[174004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:53 compute-0 podman[173949]: 2025-10-02 19:07:53.546696593 +0000 UTC m=+0.125617268 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:07:53 compute-0 python3.9[174008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:53 compute-0 sudo[174004]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:54 compute-0 sudo[174095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzcfkuywnyfsoaxvxkdywxxniyzeide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432073.1043022-695-206672307877726/AnsiballZ_file.py'
Oct 02 19:07:54 compute-0 sudo[174095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:54 compute-0 python3.9[174097]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:07:54 compute-0 sudo[174095]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:54 compute-0 sudo[174247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxbcoxbmavupioevysrbvfczwzrjzzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432074.5609918-707-190673542472752/AnsiballZ_systemd.py'
Oct 02 19:07:54 compute-0 sudo[174247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:55 compute-0 python3.9[174249]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:07:55 compute-0 systemd[1]: Reloading.
Oct 02 19:07:55 compute-0 systemd-rc-local-generator[174276]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:07:55 compute-0 systemd-sysv-generator[174280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:07:55 compute-0 systemd[1]: Starting Create netns directory...
Oct 02 19:07:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 19:07:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 19:07:55 compute-0 systemd[1]: Finished Create netns directory.
Oct 02 19:07:55 compute-0 sudo[174247]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:56 compute-0 sudo[174440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iozhtoldlpdnttqxgtcnkgrmruncsxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432076.053337-717-199280269111492/AnsiballZ_file.py'
Oct 02 19:07:56 compute-0 sudo[174440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:56 compute-0 python3.9[174442]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:56 compute-0 sudo[174440]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:57 compute-0 sudo[174592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbomciyxpvmhuoernrbjrwvtdnfiuybg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432076.844745-725-211776841441681/AnsiballZ_stat.py'
Oct 02 19:07:57 compute-0 sudo[174592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:57 compute-0 python3.9[174594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:07:57 compute-0 sudo[174592]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:58 compute-0 sudo[174715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixxtvzzbuujnbdzejwktjzyxuptaaco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432076.844745-725-211776841441681/AnsiballZ_copy.py'
Oct 02 19:07:58 compute-0 sudo[174715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:58 compute-0 python3.9[174717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432076.844745-725-211776841441681/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:58 compute-0 sudo[174715]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:59 compute-0 sudo[174867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieltmnqspggbqyeiiezievzmyzxarggb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432078.6374428-742-279620339478564/AnsiballZ_file.py'
Oct 02 19:07:59 compute-0 sudo[174867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:07:59 compute-0 python3.9[174869]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:07:59 compute-0 sudo[174867]: pam_unix(sudo:session): session closed for user root
Oct 02 19:07:59 compute-0 sudo[175019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okxdotdqbnksrsixevctmborviizbnwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432079.475226-750-271593908877032/AnsiballZ_stat.py'
Oct 02 19:07:59 compute-0 sudo[175019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:00 compute-0 python3.9[175021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:08:00 compute-0 sudo[175019]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:00 compute-0 sudo[175142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdvlopefqwihnlygpwbtnijppwnktjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432079.475226-750-271593908877032/AnsiballZ_copy.py'
Oct 02 19:08:00 compute-0 sudo[175142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:00 compute-0 python3.9[175144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432079.475226-750-271593908877032/.source.json _original_basename=.grtwue4k follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:00 compute-0 sudo[175142]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:01 compute-0 sudo[175294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbrgkvwdcninfpvofnglnckvkugptzrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432081.0799077-765-248133615304721/AnsiballZ_file.py'
Oct 02 19:08:01 compute-0 sudo[175294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:01 compute-0 python3.9[175296]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:01 compute-0 sudo[175294]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:02 compute-0 sudo[175446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crdjptvkxmzqrxomywtujgrhozbojkzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432081.9179635-773-135566100515807/AnsiballZ_stat.py'
Oct 02 19:08:02 compute-0 sudo[175446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:02 compute-0 sudo[175446]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:02 compute-0 sudo[175569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dofwejzhufdkssxocqcbgkcyghvtbjnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432081.9179635-773-135566100515807/AnsiballZ_copy.py'
Oct 02 19:08:02 compute-0 sudo[175569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:02 compute-0 sudo[175569]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:03 compute-0 sudo[175721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppfmdpwglygwxcgixlbbxudfletrnyha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432083.32033-790-273196716629770/AnsiballZ_container_config_data.py'
Oct 02 19:08:03 compute-0 sudo[175721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:03 compute-0 python3.9[175723]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 02 19:08:03 compute-0 sudo[175721]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:04 compute-0 sudo[175873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wijxndndprisqrvdfzwfjuvvyyxyjgsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432084.33234-799-207534257865273/AnsiballZ_container_config_hash.py'
Oct 02 19:08:04 compute-0 sudo[175873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:04 compute-0 python3.9[175875]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:08:04 compute-0 sudo[175873]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:05 compute-0 sudo[176025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmdchxsvvbkdtbgzpdizxwbrfviusje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432085.2219515-808-276290273124127/AnsiballZ_podman_container_info.py'
Oct 02 19:08:05 compute-0 sudo[176025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:05 compute-0 python3.9[176027]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 19:08:06 compute-0 sudo[176025]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:07 compute-0 sudo[176203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivdyvoeknlsmznamodulzwomxujonqne ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432086.635412-821-241580316149579/AnsiballZ_edpm_container_manage.py'
Oct 02 19:08:07 compute-0 sudo[176203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:07 compute-0 python3[176205]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:08:07 compute-0 podman[176241]: 2025-10-02 19:08:07.563600064 +0000 UTC m=+0.065572363 container create d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:07 compute-0 podman[176241]: 2025-10-02 19:08:07.525725262 +0000 UTC m=+0.027697601 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 19:08:07 compute-0 python3[176205]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 19:08:07 compute-0 sudo[176203]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:08 compute-0 sudo[176429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogscrgpsvimtvxtyfumrpcwhinrusbpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432087.9885-829-275063142074293/AnsiballZ_stat.py'
Oct 02 19:08:08 compute-0 sudo[176429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:08 compute-0 python3.9[176431]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:08:08 compute-0 sudo[176429]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:09 compute-0 sudo[176583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryiskifqrfpctjsbcxnowqjuovcczoxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432088.9727757-838-17722141339952/AnsiballZ_file.py'
Oct 02 19:08:09 compute-0 sudo[176583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:09 compute-0 python3.9[176585]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:09 compute-0 sudo[176583]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:09 compute-0 sudo[176659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sukmcdxhxnsyqhzyjpjwjkxzjxnartbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432088.9727757-838-17722141339952/AnsiballZ_stat.py'
Oct 02 19:08:09 compute-0 sudo[176659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:10 compute-0 python3.9[176661]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:08:10 compute-0 sudo[176659]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:10 compute-0 sudo[176810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mschsyufegircyqotkixjfiqqbpoyufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432090.1777396-838-236616590390720/AnsiballZ_copy.py'
Oct 02 19:08:10 compute-0 sudo[176810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:10 compute-0 python3.9[176812]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432090.1777396-838-236616590390720/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:11 compute-0 sudo[176810]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:11 compute-0 sudo[176886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qygvtyozhvplasshfoqzpyaicctxvsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432090.1777396-838-236616590390720/AnsiballZ_systemd.py'
Oct 02 19:08:11 compute-0 sudo[176886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:11 compute-0 python3.9[176888]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:08:11 compute-0 systemd[1]: Reloading.
Oct 02 19:08:11 compute-0 systemd-rc-local-generator[176915]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:11 compute-0 systemd-sysv-generator[176918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:11 compute-0 sudo[176886]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:12 compute-0 sudo[176996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdnoasxtuvmctwitpfqrtkjtnmcsexsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432090.1777396-838-236616590390720/AnsiballZ_systemd.py'
Oct 02 19:08:12 compute-0 sudo[176996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:12 compute-0 python3.9[176998]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:12 compute-0 systemd[1]: Reloading.
Oct 02 19:08:12 compute-0 systemd-rc-local-generator[177022]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:12 compute-0 systemd-sysv-generator[177025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:13 compute-0 systemd[1]: Starting multipathd container...
Oct 02 19:08:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12768173cc8f15511cc16a26b5166b739b5831581e290fbc6d48eb7cb82faf2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12768173cc8f15511cc16a26b5166b739b5831581e290fbc6d48eb7cb82faf2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:13 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.
Oct 02 19:08:13 compute-0 podman[177038]: 2025-10-02 19:08:13.224634976 +0000 UTC m=+0.125821424 container init d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:13 compute-0 multipathd[177054]: + sudo -E kolla_set_configs
Oct 02 19:08:13 compute-0 podman[177038]: 2025-10-02 19:08:13.254894374 +0000 UTC m=+0.156080732 container start d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:08:13 compute-0 sudo[177061]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:08:13 compute-0 podman[177038]: multipathd
Oct 02 19:08:13 compute-0 sudo[177061]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:08:13 compute-0 sudo[177061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:08:13 compute-0 systemd[1]: Started multipathd container.
Oct 02 19:08:13 compute-0 sudo[176996]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:13 compute-0 podman[177060]: 2025-10-02 19:08:13.327229558 +0000 UTC m=+0.062437530 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:13 compute-0 multipathd[177054]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:08:13 compute-0 multipathd[177054]: INFO:__main__:Validating config file
Oct 02 19:08:13 compute-0 multipathd[177054]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:08:13 compute-0 multipathd[177054]: INFO:__main__:Writing out command to execute
Oct 02 19:08:13 compute-0 systemd[1]: d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c-221279ec92fefef7.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:08:13 compute-0 systemd[1]: d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c-221279ec92fefef7.service: Failed with result 'exit-code'.
Oct 02 19:08:13 compute-0 sudo[177061]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:13 compute-0 multipathd[177054]: ++ cat /run_command
Oct 02 19:08:13 compute-0 multipathd[177054]: + CMD='/usr/sbin/multipathd -d'
Oct 02 19:08:13 compute-0 multipathd[177054]: + ARGS=
Oct 02 19:08:13 compute-0 multipathd[177054]: + sudo kolla_copy_cacerts
Oct 02 19:08:13 compute-0 sudo[177091]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:08:13 compute-0 sudo[177091]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:08:13 compute-0 sudo[177091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:08:13 compute-0 sudo[177091]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:13 compute-0 multipathd[177054]: + [[ ! -n '' ]]
Oct 02 19:08:13 compute-0 multipathd[177054]: + . kolla_extend_start
Oct 02 19:08:13 compute-0 multipathd[177054]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 19:08:13 compute-0 multipathd[177054]: Running command: '/usr/sbin/multipathd -d'
Oct 02 19:08:13 compute-0 multipathd[177054]: + umask 0022
Oct 02 19:08:13 compute-0 multipathd[177054]: + exec /usr/sbin/multipathd -d
Oct 02 19:08:13 compute-0 multipathd[177054]: 3254.090527 | --------start up--------
Oct 02 19:08:13 compute-0 multipathd[177054]: 3254.090554 | read /etc/multipath.conf
Oct 02 19:08:13 compute-0 multipathd[177054]: 3254.097179 | path checkers start up
Oct 02 19:08:14 compute-0 python3.9[177242]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:08:14 compute-0 sudo[177394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btmodnfhirxicmevqjsyiyoiuwavzndl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432094.2675772-874-10155498565007/AnsiballZ_command.py'
Oct 02 19:08:14 compute-0 sudo[177394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:14 compute-0 python3.9[177396]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:08:14 compute-0 sudo[177394]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:15 compute-0 sudo[177559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrybyvynxinxobgkuzpldczsgulctlit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432095.096838-882-209528954426469/AnsiballZ_systemd.py'
Oct 02 19:08:15 compute-0 sudo[177559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:15 compute-0 python3.9[177561]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:08:15 compute-0 systemd[1]: Stopping multipathd container...
Oct 02 19:08:15 compute-0 multipathd[177054]: 3256.592812 | exit (signal)
Oct 02 19:08:15 compute-0 multipathd[177054]: 3256.592910 | --------shut down-------
Oct 02 19:08:15 compute-0 systemd[1]: libpod-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope: Deactivated successfully.
Oct 02 19:08:15 compute-0 podman[177565]: 2025-10-02 19:08:15.954463801 +0000 UTC m=+0.088834716 container died d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:08:15 compute-0 systemd[1]: d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c-221279ec92fefef7.timer: Deactivated successfully.
Oct 02 19:08:15 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.
Oct 02 19:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c-userdata-shm.mount: Deactivated successfully.
Oct 02 19:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e12768173cc8f15511cc16a26b5166b739b5831581e290fbc6d48eb7cb82faf2-merged.mount: Deactivated successfully.
Oct 02 19:08:16 compute-0 podman[177565]: 2025-10-02 19:08:16.013297573 +0000 UTC m=+0.147668478 container cleanup d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:08:16 compute-0 podman[177565]: multipathd
Oct 02 19:08:16 compute-0 podman[177594]: multipathd
Oct 02 19:08:16 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 02 19:08:16 compute-0 systemd[1]: Stopped multipathd container.
Oct 02 19:08:16 compute-0 systemd[1]: Starting multipathd container...
Oct 02 19:08:16 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12768173cc8f15511cc16a26b5166b739b5831581e290fbc6d48eb7cb82faf2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12768173cc8f15511cc16a26b5166b739b5831581e290fbc6d48eb7cb82faf2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:08:16 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.
Oct 02 19:08:16 compute-0 podman[177607]: 2025-10-02 19:08:16.331142058 +0000 UTC m=+0.175625745 container init d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 19:08:16 compute-0 multipathd[177622]: + sudo -E kolla_set_configs
Oct 02 19:08:16 compute-0 podman[177607]: 2025-10-02 19:08:16.368020394 +0000 UTC m=+0.212504001 container start d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:08:16 compute-0 sudo[177628]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:08:16 compute-0 podman[177607]: multipathd
Oct 02 19:08:16 compute-0 sudo[177628]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:08:16 compute-0 sudo[177628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:08:16 compute-0 systemd[1]: Started multipathd container.
Oct 02 19:08:16 compute-0 sudo[177559]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:16 compute-0 multipathd[177622]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:08:16 compute-0 multipathd[177622]: INFO:__main__:Validating config file
Oct 02 19:08:16 compute-0 multipathd[177622]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:08:16 compute-0 multipathd[177622]: INFO:__main__:Writing out command to execute
Oct 02 19:08:16 compute-0 sudo[177628]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:16 compute-0 multipathd[177622]: ++ cat /run_command
Oct 02 19:08:16 compute-0 multipathd[177622]: + CMD='/usr/sbin/multipathd -d'
Oct 02 19:08:16 compute-0 multipathd[177622]: + ARGS=
Oct 02 19:08:16 compute-0 multipathd[177622]: + sudo kolla_copy_cacerts
Oct 02 19:08:16 compute-0 podman[177629]: 2025-10-02 19:08:16.478734563 +0000 UTC m=+0.092305368 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:08:16 compute-0 sudo[177649]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:08:16 compute-0 sudo[177649]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:08:16 compute-0 sudo[177649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 19:08:16 compute-0 systemd[1]: d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c-2a596aac72a62b46.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:08:16 compute-0 systemd[1]: d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c-2a596aac72a62b46.service: Failed with result 'exit-code'.
Oct 02 19:08:16 compute-0 sudo[177649]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:16 compute-0 multipathd[177622]: + [[ ! -n '' ]]
Oct 02 19:08:16 compute-0 multipathd[177622]: + . kolla_extend_start
Oct 02 19:08:16 compute-0 multipathd[177622]: Running command: '/usr/sbin/multipathd -d'
Oct 02 19:08:16 compute-0 multipathd[177622]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 19:08:16 compute-0 multipathd[177622]: + umask 0022
Oct 02 19:08:16 compute-0 multipathd[177622]: + exec /usr/sbin/multipathd -d
Oct 02 19:08:16 compute-0 multipathd[177622]: 3257.205567 | --------start up--------
Oct 02 19:08:16 compute-0 multipathd[177622]: 3257.205594 | read /etc/multipath.conf
Oct 02 19:08:16 compute-0 multipathd[177622]: 3257.215199 | path checkers start up
Oct 02 19:08:17 compute-0 sudo[177810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfbvzplgzovrqokywygasvvxygzpkfqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432096.6624744-890-33049450769507/AnsiballZ_file.py'
Oct 02 19:08:17 compute-0 sudo[177810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:17 compute-0 python3.9[177812]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:17 compute-0 sudo[177810]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:18 compute-0 sudo[177962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcnrbkkcsaukzjtmnqqswwadtubkdhnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432097.7502851-902-18290392889269/AnsiballZ_file.py'
Oct 02 19:08:18 compute-0 sudo[177962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:18 compute-0 python3.9[177964]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 19:08:18 compute-0 sudo[177962]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:19 compute-0 sudo[178114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfobjunwselfwuiyhxouddsmnsapwmsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432098.6711638-910-173323660410435/AnsiballZ_modprobe.py'
Oct 02 19:08:19 compute-0 sudo[178114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:19 compute-0 python3.9[178116]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 02 19:08:19 compute-0 kernel: Key type psk registered
Oct 02 19:08:19 compute-0 sudo[178114]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:19 compute-0 podman[178201]: 2025-10-02 19:08:19.697902427 +0000 UTC m=+0.068791570 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 19:08:19 compute-0 sudo[178293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzsrocxffvoqqdokpqhzonpjqdvodst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432099.5402417-918-123127541487299/AnsiballZ_stat.py'
Oct 02 19:08:19 compute-0 sudo[178293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:20 compute-0 python3.9[178295]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:08:20 compute-0 sudo[178293]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:20 compute-0 sudo[178416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekxcmirdjoznmsmsdmmeibpcvkyqzel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432099.5402417-918-123127541487299/AnsiballZ_copy.py'
Oct 02 19:08:20 compute-0 sudo[178416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:20 compute-0 python3.9[178418]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432099.5402417-918-123127541487299/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:20 compute-0 sudo[178416]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:21 compute-0 sudo[178568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqhktnrugxjhudzmdaqznmcwlgwazlex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432101.012425-934-102441091473364/AnsiballZ_lineinfile.py'
Oct 02 19:08:21 compute-0 sudo[178568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:21 compute-0 python3.9[178570]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:21 compute-0 sudo[178568]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:22 compute-0 sudo[178720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqlcewftbarkhypnztzqtubcblakizl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432101.821167-942-132948715365801/AnsiballZ_systemd.py'
Oct 02 19:08:22 compute-0 sudo[178720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:22 compute-0 python3.9[178722]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:08:22 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 19:08:22 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 02 19:08:22 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 02 19:08:22 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 02 19:08:22 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 02 19:08:22 compute-0 sudo[178720]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:23 compute-0 podman[178850]: 2025-10-02 19:08:23.639079138 +0000 UTC m=+0.057734954 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 19:08:23 compute-0 sudo[178902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfizafircplwcfqoptdnuocxowkpshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432103.1158683-950-18767565778634/AnsiballZ_setup.py'
Oct 02 19:08:23 compute-0 sudo[178902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:23 compute-0 podman[178852]: 2025-10-02 19:08:23.694505399 +0000 UTC m=+0.105916852 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:08:23 compute-0 python3.9[178914]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:08:24 compute-0 sudo[178902]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:24 compute-0 sudo[179003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylwbbyztypbnqyfsgoaaqnwbuqrhbvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432103.1158683-950-18767565778634/AnsiballZ_dnf.py'
Oct 02 19:08:24 compute-0 sudo[179003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:24 compute-0 python3.9[179005]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:08:31 compute-0 systemd[1]: Reloading.
Oct 02 19:08:31 compute-0 systemd-sysv-generator[179043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:31 compute-0 systemd-rc-local-generator[179039]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:31 compute-0 systemd[1]: Reloading.
Oct 02 19:08:31 compute-0 systemd-rc-local-generator[179071]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:31 compute-0 systemd-sysv-generator[179076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:31 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 19:08:31 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 19:08:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 19:08:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 02 19:08:32 compute-0 systemd[1]: Reloading.
Oct 02 19:08:32 compute-0 systemd-rc-local-generator[179169]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:32 compute-0 systemd-sysv-generator[179173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 19:08:33 compute-0 sudo[179003]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 19:08:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 02 19:08:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.786s CPU time.
Oct 02 19:08:33 compute-0 systemd[1]: run-rd27a6085c6e44e3dbe8120d742ca498a.service: Deactivated successfully.
Oct 02 19:08:34 compute-0 sudo[180455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtonhxzlyfrtambiphgblgygqwrgczyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432113.6408355-962-274216582273619/AnsiballZ_file.py'
Oct 02 19:08:34 compute-0 sudo[180455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:34 compute-0 python3.9[180457]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:34 compute-0 sudo[180455]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:35 compute-0 python3.9[180607]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:08:36 compute-0 sudo[180761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqwacvmtsptdhhocwvgjfyfuysmhdqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432115.9063106-980-87120510377744/AnsiballZ_file.py'
Oct 02 19:08:36 compute-0 sudo[180761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:36 compute-0 python3.9[180763]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:36 compute-0 sudo[180761]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:37 compute-0 sudo[180913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohytxrnfdkeatbhgdukiqnwxgbjvpaop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432117.0123215-991-234595658759701/AnsiballZ_systemd_service.py'
Oct 02 19:08:37 compute-0 sudo[180913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:38 compute-0 python3.9[180915]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:08:38 compute-0 systemd[1]: Reloading.
Oct 02 19:08:38 compute-0 systemd-sysv-generator[180947]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:08:38 compute-0 systemd-rc-local-generator[180942]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:08:38 compute-0 sudo[180913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:39 compute-0 python3.9[181100]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:08:39 compute-0 network[181117]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:08:39 compute-0 network[181118]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:08:39 compute-0 network[181119]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:08:46 compute-0 podman[181269]: 2025-10-02 19:08:46.743589798 +0000 UTC m=+0.102693046 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:08:47 compute-0 sudo[181415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swryoufgnvfonxndzhwjwlitbewgsmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432126.8833468-1010-274441989788180/AnsiballZ_systemd_service.py'
Oct 02 19:08:47 compute-0 sudo[181415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:08:47.441 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:08:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:08:47.443 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:08:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:08:47.443 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:08:47 compute-0 python3.9[181417]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:47 compute-0 sudo[181415]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:48 compute-0 sudo[181568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmcdsgmwvvkzkdelrxdncbmttjhlhytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432127.8711863-1010-230362509547602/AnsiballZ_systemd_service.py'
Oct 02 19:08:48 compute-0 sudo[181568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:48 compute-0 python3.9[181570]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:48 compute-0 sudo[181568]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:49 compute-0 sudo[181721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hykqwlyzaxkobjskbzaqqmiknekddzrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432128.787162-1010-201376558934754/AnsiballZ_systemd_service.py'
Oct 02 19:08:49 compute-0 sudo[181721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:49 compute-0 python3.9[181723]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:49 compute-0 sudo[181721]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:50 compute-0 sudo[181884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfegrdoeogblmwqgaighitqplfshswgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432129.688583-1010-265650664844742/AnsiballZ_systemd_service.py'
Oct 02 19:08:50 compute-0 sudo[181884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:50 compute-0 podman[181848]: 2025-10-02 19:08:50.077845447 +0000 UTC m=+0.070236895 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:08:50 compute-0 python3.9[181890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:50 compute-0 sudo[181884]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:50 compute-0 sudo[182048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cntqjklcoxzsxuyeypkgmmkzzwotptyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432130.5583668-1010-139879351891318/AnsiballZ_systemd_service.py'
Oct 02 19:08:50 compute-0 sudo[182048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:51 compute-0 python3.9[182050]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:51 compute-0 sudo[182048]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:51 compute-0 sudo[182201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpvpqqkbeydomntdtuigfwivxlonufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432131.3812118-1010-57505490833213/AnsiballZ_systemd_service.py'
Oct 02 19:08:51 compute-0 sudo[182201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:52 compute-0 python3.9[182203]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:52 compute-0 sudo[182201]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:52 compute-0 sudo[182354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwgcnvlgmsflohcgvmelbihuuknnoss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432132.232517-1010-246257471095236/AnsiballZ_systemd_service.py'
Oct 02 19:08:52 compute-0 sudo[182354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:52 compute-0 python3.9[182356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:52 compute-0 sudo[182354]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:53 compute-0 sudo[182507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzqwbehxbstyaymhuqtstiwapykxcopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432132.986491-1010-179560319404605/AnsiballZ_systemd_service.py'
Oct 02 19:08:53 compute-0 sudo[182507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:53 compute-0 python3.9[182509]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:08:53 compute-0 sudo[182507]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:54 compute-0 sudo[182689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eopaayjvtvqsxjfqzblvekmlsgqslbky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432133.9253917-1069-220235898437438/AnsiballZ_file.py'
Oct 02 19:08:54 compute-0 sudo[182689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:54 compute-0 podman[182634]: 2025-10-02 19:08:54.28701298 +0000 UTC m=+0.068499989 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:08:54 compute-0 podman[182635]: 2025-10-02 19:08:54.325224725 +0000 UTC m=+0.100651231 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 19:08:54 compute-0 python3.9[182696]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:54 compute-0 sudo[182689]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:55 compute-0 sudo[182857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuudkftjnwpgjoepjrxwaqwzghdlhxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432134.6615968-1069-32067974501860/AnsiballZ_file.py'
Oct 02 19:08:55 compute-0 sudo[182857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:55 compute-0 python3.9[182859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:55 compute-0 sudo[182857]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:55 compute-0 sudo[183009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhettlabykmvhxihpueidsnkejffbqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432135.3913152-1069-34008292947959/AnsiballZ_file.py'
Oct 02 19:08:55 compute-0 sudo[183009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:55 compute-0 python3.9[183011]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:55 compute-0 sudo[183009]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:56 compute-0 sudo[183161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyilbacbjkkmhvgrnqgdzemmztoroccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432136.0771186-1069-48046245339828/AnsiballZ_file.py'
Oct 02 19:08:56 compute-0 sudo[183161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:56 compute-0 python3.9[183163]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:56 compute-0 sudo[183161]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:57 compute-0 sudo[183313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxgzdklkxslcrbxnlvswvulfiwamvann ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432136.7854333-1069-92860831095212/AnsiballZ_file.py'
Oct 02 19:08:57 compute-0 sudo[183313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:57 compute-0 python3.9[183315]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:57 compute-0 sudo[183313]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:57 compute-0 sudo[183465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczkkkbumnpbqnnxchqewuogkwispvfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432137.4994335-1069-3447806717691/AnsiballZ_file.py'
Oct 02 19:08:57 compute-0 sudo[183465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:57 compute-0 python3.9[183467]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:58 compute-0 sudo[183465]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:58 compute-0 sudo[183617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjfrjemkuwvtxthfekjqpnylrkrovao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432138.13429-1069-258637375070081/AnsiballZ_file.py'
Oct 02 19:08:58 compute-0 sudo[183617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:58 compute-0 python3.9[183619]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:58 compute-0 sudo[183617]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:59 compute-0 sudo[183769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hznnnhpihygpjffuwpckokgvygmgqait ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432138.8815374-1069-54524018730458/AnsiballZ_file.py'
Oct 02 19:08:59 compute-0 sudo[183769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:08:59 compute-0 python3.9[183771]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:08:59 compute-0 sudo[183769]: pam_unix(sudo:session): session closed for user root
Oct 02 19:08:59 compute-0 sudo[183921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmzfwklhvuhqggcklquzrvmoajfaauaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432139.642999-1126-201256389206174/AnsiballZ_file.py'
Oct 02 19:08:59 compute-0 sudo[183921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:00 compute-0 python3.9[183923]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:00 compute-0 sudo[183921]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:00 compute-0 sudo[184073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vutyxstuoqhrdbpfmwtlkityedeuvpnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432140.3678474-1126-145595754150819/AnsiballZ_file.py'
Oct 02 19:09:00 compute-0 sudo[184073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:00 compute-0 python3.9[184075]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:00 compute-0 sudo[184073]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:01 compute-0 sudo[184225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlzjscfjkppvegpywscxcycmcrpxsrlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432141.0571077-1126-112224945619181/AnsiballZ_file.py'
Oct 02 19:09:01 compute-0 sudo[184225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:01 compute-0 python3.9[184227]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:01 compute-0 sudo[184225]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:02 compute-0 sudo[184377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdawvbfmurtfczzcgyafyknblmcllrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432141.7225623-1126-187144338048221/AnsiballZ_file.py'
Oct 02 19:09:02 compute-0 sudo[184377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:02 compute-0 python3.9[184379]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:02 compute-0 sudo[184377]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:02 compute-0 sudo[184529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsgptolxxyvpfzopdeamgnfwvudfqfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432142.3886595-1126-86387049598103/AnsiballZ_file.py'
Oct 02 19:09:02 compute-0 sudo[184529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:02 compute-0 python3.9[184531]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:02 compute-0 sudo[184529]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:03 compute-0 sudo[184681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqrslouafmzvjcopdmskywqrepqwkyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432143.0612087-1126-223717780099507/AnsiballZ_file.py'
Oct 02 19:09:03 compute-0 sudo[184681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:03 compute-0 python3.9[184683]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:03 compute-0 sudo[184681]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:04 compute-0 sudo[184833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgdjoknavsxdfkcbltircvxsicrtdfub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432143.803097-1126-228717901077295/AnsiballZ_file.py'
Oct 02 19:09:04 compute-0 sudo[184833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:04 compute-0 python3.9[184835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:04 compute-0 sudo[184833]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:04 compute-0 sudo[184985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuufxtwwcukktoinaytuwzafjhvwyki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432144.3493052-1126-13413617500749/AnsiballZ_file.py'
Oct 02 19:09:04 compute-0 sudo[184985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:04 compute-0 python3.9[184987]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:04 compute-0 sudo[184985]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:05 compute-0 sudo[185137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luwnnmtkdfvmsyjljgkghjdvdzkgfcck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432145.088091-1184-233639561419936/AnsiballZ_command.py'
Oct 02 19:09:05 compute-0 sudo[185137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:05 compute-0 python3.9[185139]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:05 compute-0 sudo[185137]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:06 compute-0 python3.9[185291]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:09:07 compute-0 sudo[185441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tilbmnkbtnzztrfccbrrnbsfrvkyihah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432146.8442898-1202-258717698890913/AnsiballZ_systemd_service.py'
Oct 02 19:09:07 compute-0 sudo[185441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:07 compute-0 python3.9[185443]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:09:07 compute-0 systemd[1]: Reloading.
Oct 02 19:09:07 compute-0 systemd-rc-local-generator[185469]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:07 compute-0 systemd-sysv-generator[185473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:07 compute-0 sudo[185441]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:08 compute-0 sudo[185627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omuvdmpkniakanjotfxqdrltcoyjwqpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432148.0561886-1210-187826338316018/AnsiballZ_command.py'
Oct 02 19:09:08 compute-0 sudo[185627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:08 compute-0 python3.9[185629]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:08 compute-0 sudo[185627]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:09 compute-0 sudo[185780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tktqtmrihlmoyfnyfsmryregqybvgofz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432148.7685058-1210-113729244118126/AnsiballZ_command.py'
Oct 02 19:09:09 compute-0 sudo[185780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:09 compute-0 python3.9[185782]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:09 compute-0 sudo[185780]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:09 compute-0 sudo[185933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gshirgirklfvenyohlvdkqmnqvilpemz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432149.5550501-1210-263709089450692/AnsiballZ_command.py'
Oct 02 19:09:09 compute-0 sudo[185933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:10 compute-0 python3.9[185935]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:10 compute-0 sudo[185933]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:10 compute-0 sudo[186086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkridiybqcadzpcdcjdvyucibnycvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432150.2596138-1210-163687270574683/AnsiballZ_command.py'
Oct 02 19:09:10 compute-0 sudo[186086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:10 compute-0 python3.9[186088]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:10 compute-0 sudo[186086]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:11 compute-0 sudo[186239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdffgtftwkyrdtpvkrotfjjqmujgdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432150.981476-1210-64710731592163/AnsiballZ_command.py'
Oct 02 19:09:11 compute-0 sudo[186239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:11 compute-0 python3.9[186241]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:11 compute-0 sudo[186239]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:11 compute-0 sudo[186392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vajfnazetcivgkwvxbktwjsrpovknxcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432151.6931012-1210-136090092467424/AnsiballZ_command.py'
Oct 02 19:09:11 compute-0 sudo[186392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:12 compute-0 python3.9[186394]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:12 compute-0 sudo[186392]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:12 compute-0 sudo[186545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgtbvpgtlxkmrrgcdznlejjixskoeoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432152.3496406-1210-132562808726856/AnsiballZ_command.py'
Oct 02 19:09:12 compute-0 sudo[186545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:12 compute-0 python3.9[186547]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:12 compute-0 sudo[186545]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:13 compute-0 sudo[186698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xejobnbkcyfnpqqbyaecitauozcecbjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432153.0472312-1210-276150712706011/AnsiballZ_command.py'
Oct 02 19:09:13 compute-0 sudo[186698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:13 compute-0 python3.9[186700]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:09:13 compute-0 sudo[186698]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:14 compute-0 sudo[186851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjphczsxlogfkbqcmpxaizjtfewiiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432154.5604742-1289-56483188125383/AnsiballZ_file.py'
Oct 02 19:09:14 compute-0 sudo[186851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:15 compute-0 python3.9[186853]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:15 compute-0 sudo[186851]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:15 compute-0 sudo[187003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rknuqjyofyxepdqsffpnpbfqkqupqrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432155.184218-1289-111635407788338/AnsiballZ_file.py'
Oct 02 19:09:15 compute-0 sudo[187003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:15 compute-0 python3.9[187005]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:15 compute-0 sudo[187003]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:16 compute-0 sudo[187155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muixmyxwnzzaztdcwbkdsqdkyenfbpre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432155.8625374-1289-249394326048624/AnsiballZ_file.py'
Oct 02 19:09:16 compute-0 sudo[187155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:16 compute-0 python3.9[187157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:16 compute-0 sudo[187155]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:16 compute-0 sudo[187316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azqaiqrndjuoiznwsiaedtodnulljjfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432156.5629187-1311-263518676981761/AnsiballZ_file.py'
Oct 02 19:09:16 compute-0 sudo[187316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:16 compute-0 podman[187281]: 2025-10-02 19:09:16.923728981 +0000 UTC m=+0.094963398 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 02 19:09:17 compute-0 python3.9[187324]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:17 compute-0 sudo[187316]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:17 compute-0 sudo[187479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaezszxjojducisqkrcnkqylysmzowfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432157.2815828-1311-107311789909486/AnsiballZ_file.py'
Oct 02 19:09:17 compute-0 sudo[187479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:17 compute-0 python3.9[187481]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:17 compute-0 sudo[187479]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 sudo[187631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsksghqeajtfbhiibzfizesqhnsywqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432158.0019774-1311-248931750458509/AnsiballZ_file.py'
Oct 02 19:09:18 compute-0 sudo[187631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:18 compute-0 python3.9[187633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:18 compute-0 sudo[187631]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:18 compute-0 sudo[187783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzracqvfhmubbqoiqliaebqpwcgdbdaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432158.7051795-1311-255668655366164/AnsiballZ_file.py'
Oct 02 19:09:19 compute-0 sudo[187783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:19 compute-0 python3.9[187785]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:19 compute-0 sudo[187783]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:19 compute-0 sudo[187935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwvotlbzmskwacxqlscvjeufhclkhwyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432159.4346747-1311-17381016249683/AnsiballZ_file.py'
Oct 02 19:09:19 compute-0 sudo[187935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:20 compute-0 python3.9[187937]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:20 compute-0 sudo[187935]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:20 compute-0 sudo[188098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pecpgtjnthpfefgijdrjgjdweyboqfnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432160.2369792-1311-210485184868661/AnsiballZ_file.py'
Oct 02 19:09:20 compute-0 sudo[188098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:20 compute-0 podman[188061]: 2025-10-02 19:09:20.686193865 +0000 UTC m=+0.090246881 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:09:20 compute-0 python3.9[188105]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:20 compute-0 sudo[188098]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:21 compute-0 sudo[188259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efgdmaobvpolbzzebxyqtzsumyysqjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432161.0694776-1311-183606159308296/AnsiballZ_file.py'
Oct 02 19:09:21 compute-0 sudo[188259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:21 compute-0 python3.9[188261]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:21 compute-0 sudo[188259]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:22 compute-0 sudo[188411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jivmcvzybppwmtbwhdcjoonjnchlzkbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432161.8850496-1311-43436285762728/AnsiballZ_file.py'
Oct 02 19:09:22 compute-0 sudo[188411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:22 compute-0 python3.9[188413]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:22 compute-0 sudo[188411]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:22 compute-0 sudo[188563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgfdcfkcavyffiorwskxdtkowpryhecr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432162.5729084-1311-218993159393316/AnsiballZ_file.py'
Oct 02 19:09:22 compute-0 sudo[188563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:23 compute-0 python3.9[188565]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:23 compute-0 sudo[188563]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:24 compute-0 podman[188591]: 2025-10-02 19:09:24.732017009 +0000 UTC m=+0.090644722 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:09:24 compute-0 podman[188590]: 2025-10-02 19:09:24.73204597 +0000 UTC m=+0.094891596 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:09:27 compute-0 sudo[188757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqfqxmddjadwxogdjkeyibzvxcrntsbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432167.2839918-1494-1324867597396/AnsiballZ_getent.py'
Oct 02 19:09:27 compute-0 sudo[188757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:28 compute-0 python3.9[188759]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 02 19:09:28 compute-0 sudo[188757]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:28 compute-0 sudo[188910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhsiysxnkhfvfcejptoszttwhoeokdud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432168.3310285-1502-168794644710106/AnsiballZ_group.py'
Oct 02 19:09:28 compute-0 sudo[188910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:29 compute-0 python3.9[188912]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 19:09:29 compute-0 groupadd[188913]: group added to /etc/group: name=nova, GID=42436
Oct 02 19:09:29 compute-0 groupadd[188913]: group added to /etc/gshadow: name=nova
Oct 02 19:09:29 compute-0 groupadd[188913]: new group: name=nova, GID=42436
Oct 02 19:09:29 compute-0 sudo[188910]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:29 compute-0 sudo[189068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgtqkolxrbtuflxectnzmieyoqupsmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432169.3906186-1510-134768789359781/AnsiballZ_user.py'
Oct 02 19:09:29 compute-0 sudo[189068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:30 compute-0 python3.9[189070]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 19:09:30 compute-0 useradd[189072]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 02 19:09:30 compute-0 useradd[189072]: add 'nova' to group 'libvirt'
Oct 02 19:09:30 compute-0 useradd[189072]: add 'nova' to shadow group 'libvirt'
Oct 02 19:09:30 compute-0 sudo[189068]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:31 compute-0 sshd-session[189103]: Accepted publickey for zuul from 192.168.122.30 port 34344 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:09:31 compute-0 systemd-logind[798]: New session 26 of user zuul.
Oct 02 19:09:31 compute-0 systemd[1]: Started Session 26 of User zuul.
Oct 02 19:09:31 compute-0 sshd-session[189103]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:09:31 compute-0 sshd-session[189106]: Received disconnect from 192.168.122.30 port 34344:11: disconnected by user
Oct 02 19:09:31 compute-0 sshd-session[189106]: Disconnected from user zuul 192.168.122.30 port 34344
Oct 02 19:09:31 compute-0 sshd-session[189103]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:09:31 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Oct 02 19:09:31 compute-0 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Oct 02 19:09:31 compute-0 systemd-logind[798]: Removed session 26.
Oct 02 19:09:31 compute-0 python3.9[189256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:32 compute-0 python3.9[189377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432171.4320295-1535-206787975836004/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:33 compute-0 python3.9[189527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:33 compute-0 python3.9[189603]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:34 compute-0 python3.9[189753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:34 compute-0 python3.9[189874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432173.683735-1535-225102262409730/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:35 compute-0 python3.9[190024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:36 compute-0 python3.9[190145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432174.9161472-1535-145429735098837/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:36 compute-0 python3.9[190295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:37 compute-0 python3.9[190416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432176.2193258-1535-16925694696002/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:37 compute-0 sudo[190566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiotjfrnsehqacudglyydguydlvwirqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432177.5868077-1604-222997647927250/AnsiballZ_file.py'
Oct 02 19:09:37 compute-0 sudo[190566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:38 compute-0 python3.9[190568]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:38 compute-0 sudo[190566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:38 compute-0 sudo[190718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrjszicaaxejpjrxoqmvxxqweumyecfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432178.39248-1612-29414515540637/AnsiballZ_copy.py'
Oct 02 19:09:38 compute-0 sudo[190718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:38 compute-0 python3.9[190720]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:38 compute-0 sudo[190718]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:39 compute-0 sudo[190870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oirkpwovbfudswrmvtstbdgopbyvxkzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432179.1145813-1620-6249697625999/AnsiballZ_stat.py'
Oct 02 19:09:39 compute-0 sudo[190870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:39 compute-0 python3.9[190872]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:39 compute-0 sudo[190870]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:40 compute-0 sudo[191022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iclvttnzoecwswpnmyihyqyizavuwcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432179.8571582-1628-115237182235403/AnsiballZ_stat.py'
Oct 02 19:09:40 compute-0 sudo[191022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:40 compute-0 python3.9[191024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:40 compute-0 sudo[191022]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:40 compute-0 sudo[191145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjhskorsvgeoohljdsowjzirdcjxayu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432179.8571582-1628-115237182235403/AnsiballZ_copy.py'
Oct 02 19:09:40 compute-0 sudo[191145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:40 compute-0 python3.9[191147]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759432179.8571582-1628-115237182235403/.source _original_basename=.l2stve6w follow=False checksum=e8fd6d54964c8a68110cd426c12e7ffe902f135d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 02 19:09:40 compute-0 sudo[191145]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:41 compute-0 python3.9[191299]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:42 compute-0 python3.9[191451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:43 compute-0 python3.9[191572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432182.017578-1654-218775801910713/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:43 compute-0 python3.9[191722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:09:44 compute-0 python3.9[191843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432183.4245632-1669-269893878127871/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:09:45 compute-0 sudo[191993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srxbhtkurgwuihtbvdnubjwothqpjmyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432184.8797092-1686-119061261746362/AnsiballZ_container_config_data.py'
Oct 02 19:09:45 compute-0 sudo[191993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:45 compute-0 python3.9[191995]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 02 19:09:45 compute-0 sudo[191993]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:46 compute-0 sudo[192145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjxqchbauhtyxkxppgpkzboxzhyyxvtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432185.7222307-1695-74966134808286/AnsiballZ_container_config_hash.py'
Oct 02 19:09:46 compute-0 sudo[192145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:46 compute-0 python3.9[192147]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:09:46 compute-0 sudo[192145]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:46 compute-0 sudo[192297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsmhbnbimvlazkzjgqhbakalpcdeipbh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432186.6080804-1705-163890937304182/AnsiballZ_edpm_container_manage.py'
Oct 02 19:09:46 compute-0 sudo[192297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:47 compute-0 podman[192299]: 2025-10-02 19:09:47.067278504 +0000 UTC m=+0.077804038 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 19:09:47 compute-0 python3[192300]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:09:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:09:47.442 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:09:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:09:47.444 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:09:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:09:47.444 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:09:47 compute-0 podman[192356]: 2025-10-02 19:09:47.538326007 +0000 UTC m=+0.060324989 container create 51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Oct 02 19:09:47 compute-0 podman[192356]: 2025-10-02 19:09:47.506075652 +0000 UTC m=+0.028074634 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 19:09:47 compute-0 python3[192300]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 02 19:09:47 compute-0 sudo[192297]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:48 compute-0 sudo[192544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbeifwhuipsgxscendxtxzrfyemkpuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432187.906431-1713-90300300681477/AnsiballZ_stat.py'
Oct 02 19:09:48 compute-0 sudo[192544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:48 compute-0 python3.9[192546]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:48 compute-0 sudo[192544]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:49 compute-0 sudo[192698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giytkrzvovibecnrzoemaimvxdoglulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432189.087754-1725-11107049510847/AnsiballZ_container_config_data.py'
Oct 02 19:09:49 compute-0 sudo[192698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:49 compute-0 python3.9[192700]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 02 19:09:49 compute-0 sudo[192698]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:50 compute-0 sudo[192850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzzyqlxevhgjxinhvskdmaenqyrbulb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432189.9857998-1734-46994877327506/AnsiballZ_container_config_hash.py'
Oct 02 19:09:50 compute-0 sudo[192850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:50 compute-0 python3.9[192852]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:09:50 compute-0 sudo[192850]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:51 compute-0 sudo[193013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfaoeoigqrogtqvcwqzmattpwkzpfiy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432190.948052-1744-222511903105373/AnsiballZ_edpm_container_manage.py'
Oct 02 19:09:51 compute-0 sudo[193013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:51 compute-0 podman[192976]: 2025-10-02 19:09:51.395207733 +0000 UTC m=+0.080737966 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:09:51 compute-0 python3[193021]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:09:51 compute-0 podman[193056]: 2025-10-02 19:09:51.829332177 +0000 UTC m=+0.051178944 container create ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 02 19:09:51 compute-0 podman[193056]: 2025-10-02 19:09:51.805992271 +0000 UTC m=+0.027839058 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 19:09:51 compute-0 python3[193021]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 02 19:09:51 compute-0 sudo[193013]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:52 compute-0 sudo[193244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abhmwkidwqlkaxlqwdjnpmiwcukicxtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432192.1417694-1752-101828541637375/AnsiballZ_stat.py'
Oct 02 19:09:52 compute-0 sudo[193244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:52 compute-0 python3.9[193246]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:52 compute-0 sudo[193244]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:53 compute-0 sudo[193398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmgyixfqnnanugkljklrndbbjklzgnac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432192.9379468-1761-114633473859895/AnsiballZ_file.py'
Oct 02 19:09:53 compute-0 sudo[193398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:53 compute-0 python3.9[193400]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:53 compute-0 sudo[193398]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:54 compute-0 sudo[193549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqdpnsraixgpolmcrkrcnvtombwsowyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432193.5556307-1761-269480273466385/AnsiballZ_copy.py'
Oct 02 19:09:54 compute-0 sudo[193549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:54 compute-0 python3.9[193551]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432193.5556307-1761-269480273466385/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:09:54 compute-0 sudo[193549]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:54 compute-0 sudo[193625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnntecyxbeqzyyroqozoxzarfebxklqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432193.5556307-1761-269480273466385/AnsiballZ_systemd.py'
Oct 02 19:09:54 compute-0 sudo[193625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:54 compute-0 python3.9[193627]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:09:54 compute-0 systemd[1]: Reloading.
Oct 02 19:09:55 compute-0 podman[193629]: 2025-10-02 19:09:55.075001792 +0000 UTC m=+0.065562372 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:09:55 compute-0 systemd-rc-local-generator[193695]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:55 compute-0 systemd-sysv-generator[193701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:55 compute-0 podman[193630]: 2025-10-02 19:09:55.12595278 +0000 UTC m=+0.121803888 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:09:55 compute-0 sudo[193625]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:55 compute-0 sudo[193782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijznlvqvsorhfmcvinfnrwtznjxeunwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432193.5556307-1761-269480273466385/AnsiballZ_systemd.py'
Oct 02 19:09:55 compute-0 sudo[193782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:56 compute-0 python3.9[193784]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:09:56 compute-0 systemd[1]: Reloading.
Oct 02 19:09:56 compute-0 systemd-rc-local-generator[193813]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:09:56 compute-0 systemd-sysv-generator[193817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:09:56 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 19:09:56 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:09:56 compute-0 podman[193823]: 2025-10-02 19:09:56.508401154 +0000 UTC m=+0.102932363 container init ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 19:09:56 compute-0 podman[193823]: 2025-10-02 19:09:56.517517973 +0000 UTC m=+0.112049122 container start ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Oct 02 19:09:56 compute-0 podman[193823]: nova_compute
Oct 02 19:09:56 compute-0 nova_compute[193839]: + sudo -E kolla_set_configs
Oct 02 19:09:56 compute-0 systemd[1]: Started nova_compute container.
Oct 02 19:09:56 compute-0 sudo[193782]: pam_unix(sudo:session): session closed for user root
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Validating config file
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying service configuration files
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Deleting /etc/ceph
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Creating directory /etc/ceph
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Writing out command to execute
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:09:56 compute-0 nova_compute[193839]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:09:56 compute-0 nova_compute[193839]: ++ cat /run_command
Oct 02 19:09:56 compute-0 nova_compute[193839]: + CMD=nova-compute
Oct 02 19:09:56 compute-0 nova_compute[193839]: + ARGS=
Oct 02 19:09:56 compute-0 nova_compute[193839]: + sudo kolla_copy_cacerts
Oct 02 19:09:56 compute-0 nova_compute[193839]: + [[ ! -n '' ]]
Oct 02 19:09:56 compute-0 nova_compute[193839]: + . kolla_extend_start
Oct 02 19:09:56 compute-0 nova_compute[193839]: Running command: 'nova-compute'
Oct 02 19:09:56 compute-0 nova_compute[193839]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 19:09:56 compute-0 nova_compute[193839]: + umask 0022
Oct 02 19:09:56 compute-0 nova_compute[193839]: + exec nova-compute
Oct 02 19:09:57 compute-0 python3.9[194001]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:58 compute-0 python3.9[194151]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:58 compute-0 nova_compute[193839]: 2025-10-02 19:09:58.498 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:09:58 compute-0 nova_compute[193839]: 2025-10-02 19:09:58.498 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:09:58 compute-0 nova_compute[193839]: 2025-10-02 19:09:58.499 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:09:58 compute-0 nova_compute[193839]: 2025-10-02 19:09:58.499 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 19:09:58 compute-0 nova_compute[193839]: 2025-10-02 19:09:58.630 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:09:58 compute-0 nova_compute[193839]: 2025-10-02 19:09:58.643 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:09:59 compute-0 python3.9[194305]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.274 2 INFO nova.virt.driver [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.372 2 INFO nova.compute.provider_config [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.387 2 DEBUG oslo_concurrency.lockutils [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.387 2 DEBUG oslo_concurrency.lockutils [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.387 2 DEBUG oslo_concurrency.lockutils [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.387 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.388 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.388 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.388 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.388 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.388 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.388 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.389 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.390 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.391 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.391 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.391 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.391 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.391 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.391 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.392 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.393 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.393 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.393 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.393 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.393 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.393 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.394 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.395 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.396 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.397 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.398 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.399 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.400 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.401 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.402 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.403 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.404 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.405 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.406 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.407 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.408 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.408 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.408 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.408 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.408 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.408 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.409 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.410 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.411 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.412 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.413 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.414 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.415 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.416 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.416 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.416 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.416 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.416 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.416 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.417 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.418 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.419 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.420 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.421 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.422 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.423 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.424 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.425 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.426 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.427 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.428 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.429 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.430 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.431 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.432 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.433 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.434 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.435 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.436 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.436 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.436 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.436 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.436 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.436 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.437 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.438 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.438 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.438 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.438 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.438 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.439 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.440 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.441 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.442 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.442 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.442 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.442 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.442 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.442 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.443 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.444 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.445 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.446 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.447 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.448 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.449 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.450 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.451 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.452 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.453 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.454 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.455 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.456 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.457 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.457 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.457 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.457 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.457 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.457 2 WARNING oslo_config.cfg [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 19:09:59 compute-0 nova_compute[193839]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 19:09:59 compute-0 nova_compute[193839]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 19:09:59 compute-0 nova_compute[193839]: and ``live_migration_inbound_addr`` respectively.
Oct 02 19:09:59 compute-0 nova_compute[193839]: ).  Its value may be silently ignored in the future.
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.458 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.458 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.458 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.458 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.458 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.458 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.459 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.460 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.460 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.460 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.460 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.460 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.460 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.461 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.461 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.461 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.461 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.461 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.461 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.462 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.463 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.464 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.465 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.466 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.467 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.468 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.469 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.469 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.469 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.469 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.469 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.469 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.470 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.471 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.471 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.471 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.471 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.471 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.471 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.472 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.473 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.474 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.475 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.475 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.475 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.475 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.475 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.475 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.476 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.476 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.476 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.476 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.476 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.477 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.477 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.477 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.477 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.477 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.478 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.478 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.478 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.478 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.478 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.478 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.479 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.479 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.479 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.479 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.479 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.480 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.480 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.480 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.480 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.480 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.480 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.481 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.482 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.483 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.484 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.485 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.485 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.485 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.485 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.485 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.485 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.486 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.487 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.487 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.487 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.487 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.487 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.487 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.488 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.489 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.490 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.491 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.492 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.493 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.494 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.495 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.496 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.496 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.496 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.496 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.496 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.496 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.497 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.498 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.499 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.499 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.499 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.499 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.499 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.500 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.500 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.500 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.500 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.500 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.501 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.501 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.501 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.501 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.501 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.501 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.502 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.503 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.504 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.504 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.504 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.504 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.504 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.504 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.505 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.505 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.505 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.505 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.505 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.505 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.506 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.506 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.506 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.506 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.506 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.506 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.507 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.508 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.508 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.508 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.508 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.508 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.508 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.509 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.509 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.509 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.509 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.509 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.510 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.510 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.510 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.510 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.510 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.511 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.511 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.511 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.511 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.511 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.511 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.512 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.512 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.512 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.512 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.512 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.512 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.513 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.514 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.514 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.514 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.514 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.514 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.515 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.516 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.517 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.518 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.518 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.518 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.518 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.518 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.519 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.520 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.521 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.522 2 DEBUG oslo_service.service [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.523 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.547 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.547 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.547 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.548 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 19:09:59 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 19:09:59 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.630 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd0950b09d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.633 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd0950b09d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.633 2 INFO nova.virt.libvirt.driver [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Connection event '1' reason 'None'
Oct 02 19:09:59 compute-0 sudo[194497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awuvchjgdkjvfmedrnnxbgcxswdytxlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432199.339814-1821-125942787111712/AnsiballZ_podman_container.py'
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.668 2 WARNING nova.virt.libvirt.driver [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 19:09:59 compute-0 nova_compute[193839]: 2025-10-02 19:09:59.669 2 DEBUG nova.virt.libvirt.volume.mount [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 19:09:59 compute-0 sudo[194497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:09:59 compute-0 python3.9[194501]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 19:10:00 compute-0 sudo[194497]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:00 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.499 2 INFO nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]: 
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <host>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <uuid>f951c71c-b207-47a8-9e73-3e13df1d111a</uuid>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <arch>x86_64</arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model>EPYC-Rome-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <vendor>AMD</vendor>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <microcode version='16777317'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <signature family='23' model='49' stepping='0'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='x2apic'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='tsc-deadline'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='osxsave'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='hypervisor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='tsc_adjust'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='spec-ctrl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='stibp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='arch-capabilities'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='cmp_legacy'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='topoext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='virt-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='lbrv'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='tsc-scale'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='vmcb-clean'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='pause-filter'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='pfthreshold'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='svme-addr-chk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='rdctl-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='mds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature name='pschange-mc-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <pages unit='KiB' size='4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <pages unit='KiB' size='2048'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <pages unit='KiB' size='1048576'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <power_management>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <suspend_mem/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <suspend_disk/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <suspend_hybrid/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </power_management>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <iommu support='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <migration_features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <live/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <uri_transports>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <uri_transport>tcp</uri_transport>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <uri_transport>rdma</uri_transport>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </uri_transports>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </migration_features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <topology>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <cells num='1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <cell id='0'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           <memory unit='KiB'>7864092</memory>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           <pages unit='KiB' size='4'>1966023</pages>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           <distances>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <sibling id='0' value='10'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           </distances>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           <cpus num='8'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:           </cpus>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         </cell>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </cells>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </topology>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <cache>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </cache>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <secmodel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model>selinux</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <doi>0</doi>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </secmodel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <secmodel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model>dac</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <doi>0</doi>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </secmodel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </host>
Oct 02 19:10:00 compute-0 nova_compute[193839]: 
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <guest>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <os_type>hvm</os_type>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <arch name='i686'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <wordsize>32</wordsize>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <domain type='qemu'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <domain type='kvm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <pae/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <nonpae/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <acpi default='on' toggle='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <apic default='on' toggle='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <cpuselection/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <deviceboot/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <externalSnapshot/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </guest>
Oct 02 19:10:00 compute-0 nova_compute[193839]: 
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <guest>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <os_type>hvm</os_type>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <arch name='x86_64'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <wordsize>64</wordsize>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <domain type='qemu'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <domain type='kvm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <acpi default='on' toggle='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <apic default='on' toggle='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <cpuselection/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <deviceboot/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <externalSnapshot/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </guest>
Oct 02 19:10:00 compute-0 nova_compute[193839]: 
Oct 02 19:10:00 compute-0 nova_compute[193839]: </capabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]: 
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.508 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.535 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 19:10:00 compute-0 nova_compute[193839]: <domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <domain>kvm</domain>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <arch>i686</arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <vcpu max='4096'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <iothreads supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <os supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='firmware'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <loader supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>rom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pflash</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='readonly'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>yes</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='secure'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </loader>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </os>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='maximumMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <vendor>AMD</vendor>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='succor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='custom' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-128'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-256'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-512'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <memoryBacking supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='sourceType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>file</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>anonymous</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>memfd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </memoryBacking>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <disk supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='diskDevice'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>disk</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cdrom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>floppy</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>lun</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>fdc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>sata</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </disk>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <graphics supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vnc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egl-headless</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>dbus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </graphics>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <video supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='modelType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vga</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cirrus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>none</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>bochs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ramfb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </video>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hostdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='mode'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>subsystem</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='startupPolicy'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>mandatory</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>requisite</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>optional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='subsysType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pci</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='capsType'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='pciBackend'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hostdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <rng supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>random</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </rng>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <filesystem supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='driverType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>path</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>handle</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtiofs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </filesystem>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <tpm supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-tis</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-crb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emulator</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>external</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendVersion'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>2.0</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </tpm>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <redirdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </redirdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <channel supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pty</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>unix</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </channel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <crypto supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>qemu</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </crypto>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <interface supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>passt</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </interface>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <panic supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>isa</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>hyperv</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </panic>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <gic supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <genid supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backup supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <async-teardown supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <ps2 supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sev supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sgx supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hyperv supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='features'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>relaxed</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vapic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>spinlocks</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vpindex</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>runtime</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>synic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>stimer</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reset</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vendor_id</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>frequencies</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reenlightenment</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tlbflush</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ipi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>avic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emsr_bitmap</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>xmm_input</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hyperv>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <launchSecurity supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </features>
Oct 02 19:10:00 compute-0 nova_compute[193839]: </domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.542 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 19:10:00 compute-0 nova_compute[193839]: <domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <domain>kvm</domain>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <arch>i686</arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <vcpu max='240'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <iothreads supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <os supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='firmware'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <loader supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>rom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pflash</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='readonly'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>yes</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='secure'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </loader>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </os>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='maximumMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <vendor>AMD</vendor>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='succor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='custom' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-128'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-256'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-512'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <memoryBacking supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='sourceType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>file</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>anonymous</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>memfd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </memoryBacking>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <disk supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='diskDevice'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>disk</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cdrom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>floppy</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>lun</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ide</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>fdc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>sata</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </disk>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <graphics supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vnc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egl-headless</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>dbus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </graphics>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <video supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='modelType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vga</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cirrus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>none</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>bochs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ramfb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </video>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hostdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='mode'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>subsystem</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='startupPolicy'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>mandatory</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>requisite</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>optional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='subsysType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pci</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='capsType'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='pciBackend'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hostdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <rng supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>random</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </rng>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <filesystem supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='driverType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>path</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>handle</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtiofs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </filesystem>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <tpm supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-tis</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-crb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emulator</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>external</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendVersion'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>2.0</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </tpm>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <redirdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </redirdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <channel supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pty</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>unix</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </channel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <crypto supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>qemu</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </crypto>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <interface supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>passt</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </interface>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <panic supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>isa</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>hyperv</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </panic>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <gic supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <genid supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backup supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <async-teardown supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <ps2 supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sev supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sgx supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hyperv supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='features'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>relaxed</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vapic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>spinlocks</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vpindex</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>runtime</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>synic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>stimer</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reset</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vendor_id</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>frequencies</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reenlightenment</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tlbflush</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ipi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>avic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emsr_bitmap</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>xmm_input</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hyperv>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <launchSecurity supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </features>
Oct 02 19:10:00 compute-0 nova_compute[193839]: </domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.579 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.584 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 19:10:00 compute-0 nova_compute[193839]: <domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <domain>kvm</domain>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <arch>x86_64</arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <vcpu max='4096'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <iothreads supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <os supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='firmware'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>efi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <loader supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>rom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pflash</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='readonly'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>yes</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='secure'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>yes</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </loader>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </os>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='maximumMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <vendor>AMD</vendor>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='succor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='custom' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-128'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-256'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-512'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 sudo[194694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixzeifxhgkmdhnpbjffrphezqjgebelj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432200.3180013-1829-133362316064827/AnsiballZ_systemd.py'
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 sudo[194694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <memoryBacking supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='sourceType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>file</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>anonymous</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>memfd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </memoryBacking>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <disk supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='diskDevice'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>disk</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cdrom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>floppy</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>lun</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>fdc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>sata</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </disk>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <graphics supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vnc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egl-headless</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>dbus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </graphics>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <video supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='modelType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vga</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cirrus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>none</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>bochs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ramfb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </video>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hostdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='mode'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>subsystem</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='startupPolicy'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>mandatory</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>requisite</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>optional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='subsysType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pci</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='capsType'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='pciBackend'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hostdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <rng supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>random</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </rng>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <filesystem supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='driverType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>path</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>handle</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtiofs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </filesystem>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <tpm supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-tis</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-crb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emulator</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>external</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendVersion'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>2.0</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </tpm>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <redirdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </redirdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <channel supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pty</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>unix</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </channel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <crypto supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>qemu</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </crypto>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <interface supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>passt</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </interface>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <panic supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>isa</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>hyperv</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </panic>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <gic supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <genid supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backup supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <async-teardown supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <ps2 supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sev supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sgx supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hyperv supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='features'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>relaxed</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vapic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>spinlocks</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vpindex</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>runtime</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>synic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>stimer</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reset</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vendor_id</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>frequencies</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reenlightenment</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tlbflush</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ipi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>avic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emsr_bitmap</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>xmm_input</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hyperv>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <launchSecurity supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </features>
Oct 02 19:10:00 compute-0 nova_compute[193839]: </domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.653 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 19:10:00 compute-0 nova_compute[193839]: <domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <domain>kvm</domain>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <arch>x86_64</arch>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <vcpu max='240'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <iothreads supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <os supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='firmware'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <loader supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>rom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pflash</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='readonly'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>yes</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='secure'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>no</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </loader>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </os>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='maximumMigratable'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>on</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>off</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <vendor>AMD</vendor>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='succor'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <mode name='custom' supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Denverton-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='auto-ibrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amd-psfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='stibp-always-on'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='EPYC-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-128'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-256'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx10-512'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='prefetchiti'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Haswell-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512er'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512pf'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fma4'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tbm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xop'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='amx-tile'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-bf16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-fp16'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bitalg'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrc'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fzrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='la57'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='taa-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xfd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ifma'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cmpccxadd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fbsdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='fsrs'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ibrs-all'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mcdt-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pbrsb-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='psdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='serialize'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vaes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='hle'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='rtm'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512bw'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512cd'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512dq'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512f'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='avx512vl'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='invpcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pcid'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='pku'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='mpx'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='core-capability'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='split-lock-detect'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='cldemote'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='erms'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='gfni'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdir64b'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='movdiri'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='xsaves'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='athlon-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='core2duo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='coreduo-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='n270-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='ss'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <blockers model='phenom-v1'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnow'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <feature name='3dnowext'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </blockers>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </mode>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </cpu>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <memoryBacking supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <enum name='sourceType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>file</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>anonymous</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <value>memfd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </memoryBacking>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <disk supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='diskDevice'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>disk</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cdrom</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>floppy</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>lun</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ide</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>fdc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>sata</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </disk>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <graphics supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vnc</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egl-headless</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>dbus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </graphics>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <video supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='modelType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vga</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>cirrus</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>none</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>bochs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ramfb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </video>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hostdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='mode'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>subsystem</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='startupPolicy'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>mandatory</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>requisite</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>optional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='subsysType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pci</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>scsi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='capsType'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='pciBackend'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hostdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <rng supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtio-non-transitional</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>random</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>egd</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </rng>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <filesystem supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='driverType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>path</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>handle</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>virtiofs</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </filesystem>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <tpm supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-tis</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tpm-crb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emulator</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>external</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendVersion'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>2.0</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </tpm>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <redirdev supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='bus'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>usb</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </redirdev>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <channel supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>pty</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>unix</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </channel>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <crypto supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='type'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>qemu</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendModel'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>builtin</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </crypto>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <interface supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='backendType'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>default</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>passt</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </interface>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <panic supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='model'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>isa</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>hyperv</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </panic>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </devices>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   <features>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <gic supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <genid supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <backup supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <async-teardown supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <ps2 supported='yes'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sev supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <sgx supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <hyperv supported='yes'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       <enum name='features'>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>relaxed</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vapic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>spinlocks</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vpindex</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>runtime</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>synic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>stimer</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reset</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>vendor_id</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>frequencies</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>reenlightenment</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>tlbflush</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>ipi</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>avic</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>emsr_bitmap</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:         <value>xmm_input</value>
Oct 02 19:10:00 compute-0 nova_compute[193839]:       </enum>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     </hyperv>
Oct 02 19:10:00 compute-0 nova_compute[193839]:     <launchSecurity supported='no'/>
Oct 02 19:10:00 compute-0 nova_compute[193839]:   </features>
Oct 02 19:10:00 compute-0 nova_compute[193839]: </domainCapabilities>
Oct 02 19:10:00 compute-0 nova_compute[193839]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.705 2 DEBUG nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.705 2 INFO nova.virt.libvirt.host [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Secure Boot support detected
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.708 2 INFO nova.virt.libvirt.driver [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.718 2 DEBUG nova.virt.libvirt.driver [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.773 2 INFO nova.virt.node [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Determined node identity 828c5fec-9680-4b70-a7ce-11a1217a9c75 from /var/lib/nova/compute_id
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.802 2 WARNING nova.compute.manager [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Compute nodes ['828c5fec-9680-4b70-a7ce-11a1217a9c75'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.848 2 INFO nova.compute.manager [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.893 2 WARNING nova.compute.manager [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.893 2 DEBUG oslo_concurrency.lockutils [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.894 2 DEBUG oslo_concurrency.lockutils [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.894 2 DEBUG oslo_concurrency.lockutils [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:10:00 compute-0 nova_compute[193839]: 2025-10-02 19:10:00.894 2 DEBUG nova.compute.resource_tracker [None req-29d30aec-b35e-40a9-9247-50d87a72cbab - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:10:00 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 19:10:00 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 02 19:10:00 compute-0 python3.9[194696]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:10:01 compute-0 systemd[1]: Stopping nova_compute container...
Oct 02 19:10:01 compute-0 nova_compute[193839]: 2025-10-02 19:10:01.134 2 DEBUG oslo_concurrency.lockutils [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:10:01 compute-0 nova_compute[193839]: 2025-10-02 19:10:01.135 2 DEBUG oslo_concurrency.lockutils [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:10:01 compute-0 nova_compute[193839]: 2025-10-02 19:10:01.135 2 DEBUG oslo_concurrency.lockutils [None req-1c01373d-6de5-419d-a3b0-715d7e7b3e55 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:10:01 compute-0 virtqemud[194432]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 19:10:01 compute-0 virtqemud[194432]: hostname: compute-0
Oct 02 19:10:01 compute-0 virtqemud[194432]: End of file while reading data: Input/output error
Oct 02 19:10:01 compute-0 systemd[1]: libpod-ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea.scope: Deactivated successfully.
Oct 02 19:10:01 compute-0 systemd[1]: libpod-ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea.scope: Consumed 2.935s CPU time.
Oct 02 19:10:01 compute-0 podman[194720]: 2025-10-02 19:10:01.523073904 +0000 UTC m=+0.457782586 container died ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:10:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea-userdata-shm.mount: Deactivated successfully.
Oct 02 19:10:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa-merged.mount: Deactivated successfully.
Oct 02 19:10:01 compute-0 podman[194720]: 2025-10-02 19:10:01.581229131 +0000 UTC m=+0.515937803 container cleanup ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Oct 02 19:10:01 compute-0 podman[194720]: nova_compute
Oct 02 19:10:01 compute-0 podman[194752]: nova_compute
Oct 02 19:10:01 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 02 19:10:01 compute-0 systemd[1]: Stopped nova_compute container.
Oct 02 19:10:01 compute-0 systemd[1]: Starting nova_compute container...
Oct 02 19:10:01 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61282440c4581a4711513ce3c56f64afad4ceaa38ea9d6af3b12a928331df2aa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:01 compute-0 podman[194765]: 2025-10-02 19:10:01.773828806 +0000 UTC m=+0.094420350 container init ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3)
Oct 02 19:10:01 compute-0 podman[194765]: 2025-10-02 19:10:01.781405165 +0000 UTC m=+0.101996699 container start ce1835eed7d311b8e692dd5ac3b82871f3aef0cff9abee9e6775d6b93e52b1ea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:10:01 compute-0 podman[194765]: nova_compute
Oct 02 19:10:01 compute-0 nova_compute[194781]: + sudo -E kolla_set_configs
Oct 02 19:10:01 compute-0 systemd[1]: Started nova_compute container.
Oct 02 19:10:01 compute-0 sudo[194694]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Validating config file
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying service configuration files
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /etc/ceph
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Creating directory /etc/ceph
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Writing out command to execute
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:10:01 compute-0 nova_compute[194781]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 19:10:01 compute-0 nova_compute[194781]: ++ cat /run_command
Oct 02 19:10:01 compute-0 nova_compute[194781]: + CMD=nova-compute
Oct 02 19:10:01 compute-0 nova_compute[194781]: + ARGS=
Oct 02 19:10:01 compute-0 nova_compute[194781]: + sudo kolla_copy_cacerts
Oct 02 19:10:01 compute-0 nova_compute[194781]: + [[ ! -n '' ]]
Oct 02 19:10:01 compute-0 nova_compute[194781]: + . kolla_extend_start
Oct 02 19:10:01 compute-0 nova_compute[194781]: Running command: 'nova-compute'
Oct 02 19:10:01 compute-0 nova_compute[194781]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 19:10:01 compute-0 nova_compute[194781]: + umask 0022
Oct 02 19:10:01 compute-0 nova_compute[194781]: + exec nova-compute
Oct 02 19:10:02 compute-0 sudo[194942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udgyoahjgppcbanegqeokgephukigega ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432202.042112-1838-172951498305699/AnsiballZ_podman_container.py'
Oct 02 19:10:02 compute-0 sudo[194942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:02 compute-0 python3.9[194944]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 19:10:02 compute-0 systemd[1]: Started libpod-conmon-51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292.scope.
Oct 02 19:10:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cfbaadae9b432b9e0441e67cc9ae799ad7466c62a2c73a4bbd5b11a283636e3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cfbaadae9b432b9e0441e67cc9ae799ad7466c62a2c73a4bbd5b11a283636e3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cfbaadae9b432b9e0441e67cc9ae799ad7466c62a2c73a4bbd5b11a283636e3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 02 19:10:02 compute-0 podman[194971]: 2025-10-02 19:10:02.766430369 +0000 UTC m=+0.113758047 container init 51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:02 compute-0 podman[194971]: 2025-10-02 19:10:02.773259688 +0000 UTC m=+0.120587336 container start 51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:10:02 compute-0 python3.9[194944]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Applying nova statedir ownership
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 02 19:10:02 compute-0 nova_compute_init[194992]: INFO:nova_statedir:Nova statedir ownership complete
Oct 02 19:10:02 compute-0 systemd[1]: libpod-51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292.scope: Deactivated successfully.
Oct 02 19:10:02 compute-0 podman[195006]: 2025-10-02 19:10:02.862385317 +0000 UTC m=+0.021568067 container died 51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3)
Oct 02 19:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292-userdata-shm.mount: Deactivated successfully.
Oct 02 19:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cfbaadae9b432b9e0441e67cc9ae799ad7466c62a2c73a4bbd5b11a283636e3-merged.mount: Deactivated successfully.
Oct 02 19:10:02 compute-0 podman[195006]: 2025-10-02 19:10:02.894996823 +0000 UTC m=+0.054179543 container cleanup 51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 19:10:02 compute-0 systemd[1]: libpod-conmon-51c7610a0131de1adff145c4c0e5b9949f99d5681392fac06cf03e6c3a83d292.scope: Deactivated successfully.
Oct 02 19:10:02 compute-0 sudo[194942]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:03 compute-0 sshd-session[160384]: Connection closed by 192.168.122.30 port 41996
Oct 02 19:10:03 compute-0 sshd-session[160381]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:10:03 compute-0 systemd-logind[798]: Session 24 logged out. Waiting for processes to exit.
Oct 02 19:10:03 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Oct 02 19:10:03 compute-0 systemd[1]: session-24.scope: Consumed 2min 33.695s CPU time.
Oct 02 19:10:03 compute-0 systemd-logind[798]: Removed session 24.
Oct 02 19:10:03 compute-0 nova_compute[194781]: 2025-10-02 19:10:03.878 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:10:03 compute-0 nova_compute[194781]: 2025-10-02 19:10:03.879 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:10:03 compute-0 nova_compute[194781]: 2025-10-02 19:10:03.879 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 19:10:03 compute-0 nova_compute[194781]: 2025-10-02 19:10:03.879 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 19:10:03 compute-0 nova_compute[194781]: 2025-10-02 19:10:03.999 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.024 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.530 2 INFO nova.virt.driver [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.658 2 INFO nova.compute.provider_config [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.675 2 DEBUG oslo_concurrency.lockutils [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.675 2 DEBUG oslo_concurrency.lockutils [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.675 2 DEBUG oslo_concurrency.lockutils [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.676 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.676 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.676 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.676 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.676 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.676 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.677 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.678 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.679 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.680 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.680 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.680 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.680 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.680 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.680 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.681 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.681 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.681 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.681 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.681 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.681 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.682 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.683 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.684 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.685 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.686 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.687 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.687 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.687 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.687 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.687 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.687 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.688 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.689 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.690 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.690 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.690 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.690 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.690 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.691 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.692 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.693 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.694 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.695 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.696 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.697 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.697 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.697 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.697 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.697 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.697 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.698 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.699 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.700 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.701 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.702 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.703 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.704 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.705 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.706 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.706 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.706 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.706 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.706 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.706 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.707 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.708 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.709 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.710 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.710 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.710 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.710 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.710 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.710 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.711 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.711 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.711 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.711 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.711 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.711 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.712 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.713 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.713 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.713 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.713 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.713 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.713 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.714 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.715 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.716 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.716 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.716 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.716 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.716 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.716 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.717 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.718 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.719 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.720 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.720 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.720 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.720 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.720 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.720 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.721 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.722 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.722 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.722 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.722 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.722 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.722 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.723 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.724 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.725 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.725 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.725 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.725 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.725 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.725 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.726 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.727 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.728 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.728 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.728 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.728 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.728 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.729 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.730 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.730 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.730 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.730 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.730 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.730 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.731 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.731 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.731 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.731 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.731 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.731 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.732 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.733 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.733 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.733 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.733 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.733 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.734 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.734 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.734 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.734 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.734 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.734 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.735 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.736 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.737 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.737 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.737 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.737 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.737 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.737 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.738 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.739 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.740 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.741 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.742 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.742 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.742 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.742 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.742 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.742 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.743 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.744 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.745 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.745 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.745 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.745 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.745 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.746 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.747 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.748 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.748 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.748 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.748 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.748 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.748 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.749 2 WARNING oslo_config.cfg [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 19:10:04 compute-0 nova_compute[194781]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 19:10:04 compute-0 nova_compute[194781]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 19:10:04 compute-0 nova_compute[194781]: and ``live_migration_inbound_addr`` respectively.
Oct 02 19:10:04 compute-0 nova_compute[194781]: ).  Its value may be silently ignored in the future.
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.749 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.749 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.749 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.749 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.749 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.750 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.751 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.752 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.753 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.753 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.753 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.753 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.753 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.753 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.754 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.755 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.756 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.756 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.756 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.756 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.756 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.756 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.757 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.758 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.759 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.760 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.761 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.761 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.761 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.761 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.761 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.761 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.762 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.763 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.764 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.765 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.766 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.766 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.766 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.766 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.766 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.766 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.767 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.768 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.769 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.769 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.769 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.769 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.769 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.769 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.770 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.771 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.771 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.771 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.771 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.771 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.771 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.772 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.773 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.773 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.773 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.773 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.774 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.775 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.775 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.776 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.776 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.776 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.776 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.777 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.777 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.778 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.778 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.778 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.779 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.779 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.780 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.780 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.780 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.781 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.781 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.781 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.782 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.782 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.782 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.783 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.783 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.783 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.783 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.784 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.784 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.784 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.785 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.785 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.786 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.786 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.786 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.787 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.787 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.787 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.788 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.788 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.788 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.789 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.789 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.789 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.789 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.790 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.790 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.790 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.791 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.791 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.791 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.792 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.792 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.792 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.792 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.793 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.793 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.793 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.794 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.794 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.794 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.795 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.795 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.795 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.795 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.796 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.796 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.797 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.797 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.797 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.798 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.798 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.798 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.798 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.799 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.799 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.799 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.800 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.800 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.800 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.801 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.801 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.802 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.802 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.802 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.803 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.803 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.803 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.804 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.804 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.804 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.805 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.805 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.805 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.806 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.806 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.806 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.806 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.807 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.807 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.807 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.808 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.808 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.808 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.809 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.809 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.809 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.810 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.810 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.810 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.811 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.811 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.812 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.812 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.812 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.813 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.813 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.813 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.814 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.814 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.814 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.815 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.815 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.815 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.816 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.816 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.816 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.817 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.817 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.817 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.818 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.818 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.818 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.819 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.819 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.819 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.820 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.820 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.820 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.820 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.821 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.821 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.821 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.822 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.822 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.822 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.823 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.823 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.823 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.824 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.824 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.824 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.824 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.825 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.825 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.825 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.826 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.826 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.826 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.826 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.826 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.827 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.827 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.827 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.827 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.827 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.828 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.828 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.828 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.828 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.828 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.828 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.829 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.829 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.829 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.829 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.830 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.830 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.830 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.830 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.830 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.830 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.831 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.831 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.831 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.831 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.831 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.832 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.832 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.832 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.832 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.832 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.832 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.833 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.833 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.833 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.833 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.833 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.834 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.834 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.834 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.834 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.834 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.835 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.835 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.835 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.835 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.835 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.836 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.836 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.836 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.836 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.836 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.836 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.837 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.837 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.837 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.837 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.837 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.838 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.838 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.838 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.838 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.838 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.839 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.839 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.839 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.839 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.839 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.840 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.840 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.840 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.840 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.840 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.841 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.841 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.841 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.841 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.841 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.841 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.842 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.842 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.842 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.842 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.842 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.843 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.843 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.843 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.843 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.843 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.844 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.844 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.844 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.844 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.844 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.844 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.845 2 DEBUG oslo_service.service [None req-34712dfe-0fb8-4078-9010-66ea5956125c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.846 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.864 2 INFO nova.virt.node [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Determined node identity 828c5fec-9680-4b70-a7ce-11a1217a9c75 from /var/lib/nova/compute_id
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.865 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.866 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.866 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.867 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.880 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fadb09008b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.882 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fadb09008b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.883 2 INFO nova.virt.libvirt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Connection event '1' reason 'None'
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.895 2 DEBUG nova.virt.libvirt.volume.mount [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.896 2 INFO nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 19:10:04 compute-0 nova_compute[194781]: 
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <host>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <uuid>f951c71c-b207-47a8-9e73-3e13df1d111a</uuid>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <cpu>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <arch>x86_64</arch>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model>EPYC-Rome-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <vendor>AMD</vendor>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <microcode version='16777317'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <signature family='23' model='49' stepping='0'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='x2apic'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='tsc-deadline'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='osxsave'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='hypervisor'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='tsc_adjust'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='spec-ctrl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='stibp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='arch-capabilities'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='cmp_legacy'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='topoext'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='virt-ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='lbrv'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='tsc-scale'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='vmcb-clean'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='pause-filter'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='pfthreshold'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='svme-addr-chk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='rdctl-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='mds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature name='pschange-mc-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <pages unit='KiB' size='4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <pages unit='KiB' size='2048'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <pages unit='KiB' size='1048576'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </cpu>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <power_management>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <suspend_mem/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <suspend_disk/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <suspend_hybrid/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </power_management>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <iommu support='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <migration_features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <live/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <uri_transports>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <uri_transport>tcp</uri_transport>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <uri_transport>rdma</uri_transport>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </uri_transports>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </migration_features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <topology>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <cells num='1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <cell id='0'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           <memory unit='KiB'>7864092</memory>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           <pages unit='KiB' size='4'>1966023</pages>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           <distances>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <sibling id='0' value='10'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           </distances>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           <cpus num='8'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:           </cpus>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         </cell>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </cells>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </topology>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <cache>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </cache>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <secmodel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model>selinux</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <doi>0</doi>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </secmodel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <secmodel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model>dac</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <doi>0</doi>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </secmodel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </host>
Oct 02 19:10:04 compute-0 nova_compute[194781]: 
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <guest>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <os_type>hvm</os_type>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <arch name='i686'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <wordsize>32</wordsize>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <domain type='qemu'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <domain type='kvm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </arch>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <pae/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <nonpae/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <acpi default='on' toggle='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <apic default='on' toggle='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <cpuselection/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <deviceboot/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <externalSnapshot/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </guest>
Oct 02 19:10:04 compute-0 nova_compute[194781]: 
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <guest>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <os_type>hvm</os_type>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <arch name='x86_64'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <wordsize>64</wordsize>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <domain type='qemu'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <domain type='kvm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </arch>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <acpi default='on' toggle='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <apic default='on' toggle='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <cpuselection/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <deviceboot/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <disksnapshot default='on' toggle='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <externalSnapshot/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </guest>
Oct 02 19:10:04 compute-0 nova_compute[194781]: 
Oct 02 19:10:04 compute-0 nova_compute[194781]: </capabilities>
Oct 02 19:10:04 compute-0 nova_compute[194781]: 
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.904 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.908 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 19:10:04 compute-0 nova_compute[194781]: <domainCapabilities>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <domain>kvm</domain>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <arch>i686</arch>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <vcpu max='4096'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <iothreads supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <os supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <enum name='firmware'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <loader supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>rom</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>pflash</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='readonly'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>yes</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='secure'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </loader>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </os>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <cpu>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='maximumMigratable'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <vendor>AMD</vendor>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='succor'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='custom' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cooperlake'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10-128'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10-256'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10-512'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='KnightsMill'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SierraForest'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Snowridge'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='athlon'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='athlon-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='core2duo'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='core2duo-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='coreduo'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='coreduo-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='n270'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='n270-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='phenom'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='phenom-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <memoryBacking supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <enum name='sourceType'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <value>file</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <value>anonymous</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <value>memfd</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </memoryBacking>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <disk supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='diskDevice'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>disk</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>cdrom</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>floppy</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>lun</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>fdc</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>sata</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <graphics supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>vnc</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>egl-headless</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>dbus</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </graphics>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <video supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='modelType'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>vga</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>cirrus</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>none</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>bochs</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>ramfb</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </video>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <hostdev supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='mode'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>subsystem</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='startupPolicy'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>mandatory</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>requisite</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>optional</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='subsysType'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>pci</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='capsType'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='pciBackend'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </hostdev>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <rng supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>random</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>egd</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <filesystem supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='driverType'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>path</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>handle</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>virtiofs</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </filesystem>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <tpm supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>tpm-tis</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>tpm-crb</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>emulator</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>external</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='backendVersion'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>2.0</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </tpm>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <redirdev supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </redirdev>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <channel supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>pty</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>unix</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </channel>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <crypto supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='model'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>qemu</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </crypto>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <interface supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='backendType'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>passt</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <panic supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>isa</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>hyperv</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </panic>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <features>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <gic supported='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <genid supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <backup supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <async-teardown supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <ps2 supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <sev supported='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <sgx supported='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <hyperv supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='features'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>relaxed</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>vapic</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>spinlocks</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>vpindex</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>runtime</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>synic</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>stimer</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>reset</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>vendor_id</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>frequencies</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>reenlightenment</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>tlbflush</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>ipi</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>avic</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>emsr_bitmap</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>xmm_input</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </hyperv>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <launchSecurity supported='no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </features>
Oct 02 19:10:04 compute-0 nova_compute[194781]: </domainCapabilities>
Oct 02 19:10:04 compute-0 nova_compute[194781]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:04 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.914 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 19:10:04 compute-0 nova_compute[194781]: <domainCapabilities>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <domain>kvm</domain>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <arch>i686</arch>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <vcpu max='240'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <iothreads supported='yes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <os supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <enum name='firmware'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <loader supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>rom</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>pflash</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='readonly'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>yes</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='secure'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </loader>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   </os>
Oct 02 19:10:04 compute-0 nova_compute[194781]:   <cpu>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <enum name='maximumMigratable'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <vendor>AMD</vendor>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='succor'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:04 compute-0 nova_compute[194781]:     <mode name='custom' supported='yes'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cooperlake'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Denverton-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='EPYC-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10-128'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10-256'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx10-512'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Haswell-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='KnightsMill'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:04 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:04 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SierraForest'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='athlon'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='athlon-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='core2duo'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='core2duo-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='coreduo'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='coreduo-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='n270'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='n270-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='phenom'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='phenom-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <memoryBacking supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <enum name='sourceType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>file</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>anonymous</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>memfd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </memoryBacking>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <disk supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='diskDevice'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>disk</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>cdrom</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>floppy</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>lun</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ide</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>fdc</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>sata</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <graphics supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vnc</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>egl-headless</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>dbus</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </graphics>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <video supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='modelType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vga</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>cirrus</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>none</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>bochs</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ramfb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </video>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <hostdev supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='mode'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>subsystem</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='startupPolicy'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>mandatory</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>requisite</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>optional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='subsysType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pci</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='capsType'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='pciBackend'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </hostdev>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <rng supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>random</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>egd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <filesystem supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='driverType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>path</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>handle</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtiofs</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </filesystem>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <tpm supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tpm-tis</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tpm-crb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>emulator</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>external</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendVersion'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>2.0</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </tpm>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <redirdev supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </redirdev>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <channel supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pty</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>unix</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </channel>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <crypto supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>qemu</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </crypto>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <interface supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>passt</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <panic supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>isa</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>hyperv</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </panic>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <features>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <gic supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <genid supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <backup supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <async-teardown supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <ps2 supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <sev supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <sgx supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <hyperv supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='features'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>relaxed</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vapic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>spinlocks</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vpindex</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>runtime</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>synic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>stimer</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>reset</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vendor_id</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>frequencies</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>reenlightenment</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tlbflush</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ipi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>avic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>emsr_bitmap</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>xmm_input</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </hyperv>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <launchSecurity supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </features>
Oct 02 19:10:05 compute-0 nova_compute[194781]: </domainCapabilities>
Oct 02 19:10:05 compute-0 nova_compute[194781]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.947 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:04.951 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 19:10:05 compute-0 nova_compute[194781]: <domainCapabilities>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <domain>kvm</domain>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <arch>x86_64</arch>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <vcpu max='4096'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <iothreads supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <os supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <enum name='firmware'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>efi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <loader supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>rom</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pflash</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='readonly'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>yes</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='secure'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>yes</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </loader>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </os>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <cpu>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='maximumMigratable'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <vendor>AMD</vendor>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='succor'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='custom' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cooperlake'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10-128'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10-256'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10-512'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='KnightsMill'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SierraForest'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='athlon'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='athlon-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='core2duo'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='core2duo-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='coreduo'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='coreduo-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='n270'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='n270-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='phenom'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='phenom-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <memoryBacking supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <enum name='sourceType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>file</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>anonymous</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>memfd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </memoryBacking>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <disk supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='diskDevice'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>disk</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>cdrom</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>floppy</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>lun</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>fdc</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>sata</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <graphics supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vnc</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>egl-headless</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>dbus</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </graphics>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <video supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='modelType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vga</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>cirrus</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>none</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>bochs</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ramfb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </video>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <hostdev supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='mode'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>subsystem</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='startupPolicy'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>mandatory</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>requisite</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>optional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='subsysType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pci</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='capsType'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='pciBackend'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </hostdev>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <rng supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>random</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>egd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <filesystem supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='driverType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>path</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>handle</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtiofs</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </filesystem>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <tpm supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tpm-tis</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tpm-crb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>emulator</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>external</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendVersion'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>2.0</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </tpm>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <redirdev supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </redirdev>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <channel supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pty</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>unix</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </channel>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <crypto supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>qemu</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </crypto>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <interface supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>passt</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <panic supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>isa</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>hyperv</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </panic>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <features>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <gic supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <genid supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <backup supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <async-teardown supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <ps2 supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <sev supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <sgx supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <hyperv supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='features'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>relaxed</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vapic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>spinlocks</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vpindex</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>runtime</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>synic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>stimer</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>reset</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vendor_id</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>frequencies</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>reenlightenment</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tlbflush</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ipi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>avic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>emsr_bitmap</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>xmm_input</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </hyperv>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <launchSecurity supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </features>
Oct 02 19:10:05 compute-0 nova_compute[194781]: </domainCapabilities>
Oct 02 19:10:05 compute-0 nova_compute[194781]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.013 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 19:10:05 compute-0 nova_compute[194781]: <domainCapabilities>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <domain>kvm</domain>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <arch>x86_64</arch>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <vcpu max='240'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <iothreads supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <os supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <enum name='firmware'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <loader supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>rom</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pflash</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='readonly'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>yes</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='secure'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>no</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </loader>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </os>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <cpu>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='host-passthrough' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='hostPassthroughMigratable'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='maximum' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='maximumMigratable'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>on</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>off</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='host-model' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <vendor>AMD</vendor>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='x2apic'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='hypervisor'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='stibp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='ssbd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='overflow-recov'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='succor'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='ibrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='lbrv'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='tsc-scale'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='flushbyasid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='pause-filter'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='pfthreshold'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='rdctl-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='mds-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='gds-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='require' name='rfds-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <feature policy='disable' name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <mode name='custom' supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Broadwell-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cooperlake'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Cooperlake-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Denverton-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Dhyana-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='auto-ibrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Milan-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amd-psfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='no-nested-data-bp'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='null-sel-clr-base'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='stibp-always-on'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-Rome-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='EPYC-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='GraniteRapids-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10-128'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10-256'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx10-512'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='prefetchiti'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Haswell-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v6'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Icelake-Server-v7'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='IvyBridge-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='KnightsMill'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='KnightsMill-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4fmaps'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-4vnniw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512er'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512pf'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G4-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Opteron_G5-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fma4'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tbm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xop'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SapphireRapids-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='amx-tile'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-bf16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-fp16'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512-vpopcntdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bitalg'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vbmi2'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrc'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fzrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='la57'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='taa-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='tsx-ldtrk'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xfd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SierraForest'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='SierraForest-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ifma'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-ne-convert'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx-vnni-int8'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='bus-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cmpccxadd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fbsdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='fsrs'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ibrs-all'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mcdt-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pbrsb-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='psdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='sbdr-ssdp-no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='serialize'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vaes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='vpclmulqdq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Client-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='hle'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='rtm'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Skylake-Server-v5'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512bw'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512cd'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512dq'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512f'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='avx512vl'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='invpcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pcid'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='pku'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='mpx'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v2'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v3'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='core-capability'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='split-lock-detect'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='Snowridge-v4'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='cldemote'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='erms'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='gfni'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdir64b'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='movdiri'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='xsaves'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='athlon'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='athlon-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='core2duo'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='core2duo-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='coreduo'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='coreduo-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='n270'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='n270-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='ss'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='phenom'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <blockers model='phenom-v1'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnow'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <feature name='3dnowext'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </blockers>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </mode>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <memoryBacking supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <enum name='sourceType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>file</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>anonymous</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <value>memfd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </memoryBacking>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <disk supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='diskDevice'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>disk</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>cdrom</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>floppy</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>lun</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ide</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>fdc</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>sata</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <graphics supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vnc</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>egl-headless</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>dbus</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </graphics>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <video supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='modelType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vga</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>cirrus</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>none</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>bochs</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ramfb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </video>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <hostdev supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='mode'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>subsystem</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='startupPolicy'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>mandatory</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>requisite</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>optional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='subsysType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pci</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>scsi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='capsType'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='pciBackend'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </hostdev>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <rng supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtio-non-transitional</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>random</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>egd</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <filesystem supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='driverType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>path</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>handle</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>virtiofs</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </filesystem>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <tpm supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tpm-tis</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tpm-crb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>emulator</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>external</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendVersion'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>2.0</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </tpm>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <redirdev supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='bus'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>usb</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </redirdev>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <channel supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>pty</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>unix</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </channel>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <crypto supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='type'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>qemu</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendModel'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>builtin</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </crypto>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <interface supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='backendType'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>default</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>passt</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <panic supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='model'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>isa</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>hyperv</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </panic>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   <features>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <gic supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <vmcoreinfo supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <genid supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <backingStoreInput supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <backup supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <async-teardown supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <ps2 supported='yes'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <sev supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <sgx supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <hyperv supported='yes'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       <enum name='features'>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>relaxed</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vapic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>spinlocks</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vpindex</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>runtime</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>synic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>stimer</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>reset</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>vendor_id</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>frequencies</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>reenlightenment</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>tlbflush</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>ipi</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>avic</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>emsr_bitmap</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:         <value>xmm_input</value>
Oct 02 19:10:05 compute-0 nova_compute[194781]:       </enum>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     </hyperv>
Oct 02 19:10:05 compute-0 nova_compute[194781]:     <launchSecurity supported='no'/>
Oct 02 19:10:05 compute-0 nova_compute[194781]:   </features>
Oct 02 19:10:05 compute-0 nova_compute[194781]: </domainCapabilities>
Oct 02 19:10:05 compute-0 nova_compute[194781]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.069 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.070 2 INFO nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Secure Boot support detected
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.072 2 INFO nova.virt.libvirt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.072 2 INFO nova.virt.libvirt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.082 2 DEBUG nova.virt.libvirt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.100 2 INFO nova.virt.node [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Determined node identity 828c5fec-9680-4b70-a7ce-11a1217a9c75 from /var/lib/nova/compute_id
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.114 2 WARNING nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Compute nodes ['828c5fec-9680-4b70-a7ce-11a1217a9c75'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.142 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.158 2 WARNING nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.159 2 DEBUG oslo_concurrency.lockutils [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.159 2 DEBUG oslo_concurrency.lockutils [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.159 2 DEBUG oslo_concurrency.lockutils [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.159 2 DEBUG nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.313 2 WARNING nova.virt.libvirt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.314 2 DEBUG nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6067MB free_disk=72.73431396484375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.314 2 DEBUG oslo_concurrency.lockutils [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.314 2 DEBUG oslo_concurrency.lockutils [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.329 2 WARNING nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] No compute node record for compute-0.ctlplane.example.com:828c5fec-9680-4b70-a7ce-11a1217a9c75: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 828c5fec-9680-4b70-a7ce-11a1217a9c75 could not be found.
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.349 2 INFO nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 828c5fec-9680-4b70-a7ce-11a1217a9c75
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.456 2 DEBUG nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:10:05 compute-0 nova_compute[194781]: 2025-10-02 19:10:05.457 2 DEBUG nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.464 2 INFO nova.scheduler.client.report [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [req-154a692d-0f41-4aeb-94b4-6412dd36797c] Created resource provider record via placement API for resource provider with UUID 828c5fec-9680-4b70-a7ce-11a1217a9c75 and name compute-0.ctlplane.example.com.
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.864 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 02 19:10:06 compute-0 nova_compute[194781]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.864 2 INFO nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] kernel doesn't support AMD SEV
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.865 2 DEBUG nova.compute.provider_tree [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.865 2 DEBUG nova.virt.libvirt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.934 2 DEBUG nova.scheduler.client.report [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Updated inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.934 2 DEBUG nova.compute.provider_tree [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Updating resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 19:10:06 compute-0 nova_compute[194781]: 2025-10-02 19:10:06.935 2 DEBUG nova.compute.provider_tree [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:10:07 compute-0 nova_compute[194781]: 2025-10-02 19:10:07.029 2 DEBUG nova.compute.provider_tree [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Updating resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 19:10:07 compute-0 nova_compute[194781]: 2025-10-02 19:10:07.050 2 DEBUG nova.compute.resource_tracker [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:10:07 compute-0 nova_compute[194781]: 2025-10-02 19:10:07.051 2 DEBUG oslo_concurrency.lockutils [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:10:07 compute-0 nova_compute[194781]: 2025-10-02 19:10:07.051 2 DEBUG nova.service [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 02 19:10:07 compute-0 nova_compute[194781]: 2025-10-02 19:10:07.136 2 DEBUG nova.service [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 02 19:10:07 compute-0 nova_compute[194781]: 2025-10-02 19:10:07.136 2 DEBUG nova.servicegroup.drivers.db [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 02 19:10:08 compute-0 unix_chkpwd[195081]: password check failed for user (root)
Oct 02 19:10:08 compute-0 sshd-session[195079]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:10:09 compute-0 sshd-session[195082]: Accepted publickey for zuul from 192.168.122.30 port 57986 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:10:09 compute-0 systemd-logind[798]: New session 27 of user zuul.
Oct 02 19:10:09 compute-0 systemd[1]: Started Session 27 of User zuul.
Oct 02 19:10:09 compute-0 sshd-session[195082]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:10:10 compute-0 python3.9[195235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:10:11 compute-0 sshd-session[195079]: Failed password for root from 193.46.255.103 port 36072 ssh2
Oct 02 19:10:11 compute-0 unix_chkpwd[195387]: password check failed for user (root)
Oct 02 19:10:11 compute-0 sudo[195390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixpcseueoseerbfovdveakzvxdgkqvke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432211.1071665-36-80523545489329/AnsiballZ_systemd_service.py'
Oct 02 19:10:11 compute-0 sudo[195390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:12 compute-0 python3.9[195392]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:10:12 compute-0 systemd[1]: Reloading.
Oct 02 19:10:12 compute-0 systemd-rc-local-generator[195417]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:12 compute-0 systemd-sysv-generator[195423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:12 compute-0 sudo[195390]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:13 compute-0 sshd-session[195079]: Failed password for root from 193.46.255.103 port 36072 ssh2
Oct 02 19:10:13 compute-0 python3.9[195577]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:10:13 compute-0 network[195594]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:10:13 compute-0 network[195595]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:10:13 compute-0 network[195596]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:10:13 compute-0 unix_chkpwd[195601]: password check failed for user (root)
Oct 02 19:10:16 compute-0 sshd-session[195079]: Failed password for root from 193.46.255.103 port 36072 ssh2
Oct 02 19:10:16 compute-0 sshd-session[195079]: Received disconnect from 193.46.255.103 port 36072:11:  [preauth]
Oct 02 19:10:16 compute-0 sshd-session[195079]: Disconnected from authenticating user root 193.46.255.103 port 36072 [preauth]
Oct 02 19:10:16 compute-0 sshd-session[195079]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:10:17 compute-0 podman[195685]: 2025-10-02 19:10:17.199260016 +0000 UTC m=+0.081303395 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:10:17 compute-0 unix_chkpwd[195724]: password check failed for user (root)
Oct 02 19:10:17 compute-0 sshd-session[195667]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:10:18 compute-0 sudo[195895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nslhivabekondtftxdafazjqfoqqzgtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432218.2916975-55-166009619737653/AnsiballZ_systemd_service.py'
Oct 02 19:10:18 compute-0 sudo[195895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:18 compute-0 python3.9[195897]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:10:19 compute-0 sudo[195895]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:19 compute-0 sshd-session[195667]: Failed password for root from 193.46.255.103 port 53518 ssh2
Oct 02 19:10:19 compute-0 sudo[196048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awxcrzytqxbhjudxhlyppqwfbsqsunhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432219.299303-65-53870301155432/AnsiballZ_file.py'
Oct 02 19:10:19 compute-0 sudo[196048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:19 compute-0 python3.9[196050]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:19 compute-0 sudo[196048]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:19 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:10:19 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:10:20 compute-0 sudo[196201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwdxulqiusumlkelihyleknbayxpbrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432220.168933-73-28892926376770/AnsiballZ_file.py'
Oct 02 19:10:20 compute-0 sudo[196201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:20 compute-0 unix_chkpwd[196204]: password check failed for user (root)
Oct 02 19:10:20 compute-0 python3.9[196203]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:20 compute-0 sudo[196201]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:20 compute-0 auditd[707]: Audit daemon rotating log files
Oct 02 19:10:21 compute-0 sudo[196354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmisgcmyraabvrxdfroyrjqentgaoab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432220.944194-82-131536505685508/AnsiballZ_command.py'
Oct 02 19:10:21 compute-0 sudo[196354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:21 compute-0 podman[196356]: 2025-10-02 19:10:21.499626797 +0000 UTC m=+0.062526592 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 19:10:21 compute-0 python3.9[196357]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:21 compute-0 sudo[196354]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:22 compute-0 python3.9[196529]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:10:22 compute-0 sshd-session[195667]: Failed password for root from 193.46.255.103 port 53518 ssh2
Oct 02 19:10:23 compute-0 sudo[196679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtygcmyyvjwnooecwtfxohlqdddotcnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432222.9015923-100-164404052798622/AnsiballZ_systemd_service.py'
Oct 02 19:10:23 compute-0 sudo[196679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:23 compute-0 python3.9[196681]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:10:23 compute-0 systemd[1]: Reloading.
Oct 02 19:10:23 compute-0 systemd-rc-local-generator[196709]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:23 compute-0 systemd-sysv-generator[196713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:23 compute-0 unix_chkpwd[196717]: password check failed for user (root)
Oct 02 19:10:23 compute-0 sudo[196679]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:24 compute-0 sudo[196867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raojhnhpuhwkrxqrzsuxedkfsgbsqzfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432223.9950325-108-175653041834258/AnsiballZ_command.py'
Oct 02 19:10:24 compute-0 sudo[196867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:24 compute-0 python3.9[196869]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:10:24 compute-0 sudo[196867]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:25 compute-0 sudo[197020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mksmhkdzbydymkcpovgszpiscdnmmouv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432224.842667-117-235357437808887/AnsiballZ_file.py'
Oct 02 19:10:25 compute-0 sudo[197020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:25 compute-0 sshd-session[195667]: Failed password for root from 193.46.255.103 port 53518 ssh2
Oct 02 19:10:25 compute-0 python3.9[197022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:10:25 compute-0 sshd-session[195667]: Received disconnect from 193.46.255.103 port 53518:11:  [preauth]
Oct 02 19:10:25 compute-0 sshd-session[195667]: Disconnected from authenticating user root 193.46.255.103 port 53518 [preauth]
Oct 02 19:10:25 compute-0 sshd-session[195667]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:10:25 compute-0 sudo[197020]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:25 compute-0 podman[197023]: 2025-10-02 19:10:25.486976223 +0000 UTC m=+0.076026707 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 19:10:25 compute-0 podman[197024]: 2025-10-02 19:10:25.523155462 +0000 UTC m=+0.117943307 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:26 compute-0 python3.9[197218]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:10:26 compute-0 unix_chkpwd[197221]: password check failed for user (root)
Oct 02 19:10:26 compute-0 sshd-session[197089]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:10:27 compute-0 python3.9[197371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:27 compute-0 python3.9[197492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432226.473738-133-6431213968766/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:10:28 compute-0 sudo[197642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjkazajnqjzcwkjswxgrrsriyqmmcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432228.1185238-148-218353634311994/AnsiballZ_group.py'
Oct 02 19:10:28 compute-0 sudo[197642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:28 compute-0 sshd-session[197089]: Failed password for root from 193.46.255.103 port 12468 ssh2
Oct 02 19:10:28 compute-0 python3.9[197644]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 02 19:10:28 compute-0 sudo[197642]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:29 compute-0 unix_chkpwd[197721]: password check failed for user (root)
Oct 02 19:10:29 compute-0 sudo[197795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzfgpxopbyxrsbsarkpbywgfjlrovpec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432229.254604-159-243156617029942/AnsiballZ_getent.py'
Oct 02 19:10:29 compute-0 sudo[197795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:29 compute-0 python3.9[197797]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 02 19:10:29 compute-0 sudo[197795]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:30 compute-0 sudo[197948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fygchqeuqutqvvqaszinjrrdptdicbdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432230.1010027-167-149270733959961/AnsiballZ_group.py'
Oct 02 19:10:30 compute-0 sudo[197948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:30 compute-0 python3.9[197950]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 19:10:30 compute-0 groupadd[197951]: group added to /etc/group: name=ceilometer, GID=42405
Oct 02 19:10:30 compute-0 groupadd[197951]: group added to /etc/gshadow: name=ceilometer
Oct 02 19:10:30 compute-0 groupadd[197951]: new group: name=ceilometer, GID=42405
Oct 02 19:10:30 compute-0 sudo[197948]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:31 compute-0 sudo[198106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmcbfbunfqvceetnjxxuqfndwboargb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432230.7960966-175-228962448856865/AnsiballZ_user.py'
Oct 02 19:10:31 compute-0 sudo[198106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:31 compute-0 python3.9[198108]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 19:10:31 compute-0 useradd[198110]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 19:10:31 compute-0 useradd[198110]: add 'ceilometer' to group 'libvirt'
Oct 02 19:10:31 compute-0 useradd[198110]: add 'ceilometer' to shadow group 'libvirt'
Oct 02 19:10:31 compute-0 sudo[198106]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:31 compute-0 sshd-session[197089]: Failed password for root from 193.46.255.103 port 12468 ssh2
Oct 02 19:10:32 compute-0 unix_chkpwd[198216]: password check failed for user (root)
Oct 02 19:10:32 compute-0 python3.9[198267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:33 compute-0 python3.9[198388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432232.3886852-201-265743409183884/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:33 compute-0 python3.9[198538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:34 compute-0 python3.9[198659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432233.5566235-201-26638300420315/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:34 compute-0 sshd-session[197089]: Failed password for root from 193.46.255.103 port 12468 ssh2
Oct 02 19:10:35 compute-0 python3.9[198809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:35 compute-0 sshd-session[197089]: Received disconnect from 193.46.255.103 port 12468:11:  [preauth]
Oct 02 19:10:35 compute-0 sshd-session[197089]: Disconnected from authenticating user root 193.46.255.103 port 12468 [preauth]
Oct 02 19:10:35 compute-0 sshd-session[197089]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:10:35 compute-0 python3.9[198930]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432234.6787755-201-175748018220107/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:36 compute-0 python3.9[199080]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:10:37 compute-0 python3.9[199232]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:10:37 compute-0 python3.9[199384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:38 compute-0 python3.9[199505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432237.3258328-260-187948203669340/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:39 compute-0 python3.9[199655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:39 compute-0 python3.9[199731]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:40 compute-0 python3.9[199881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:40 compute-0 python3.9[200002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432239.6408045-260-175264885332174/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:41 compute-0 python3.9[200152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:42 compute-0 python3.9[200273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432241.1033895-260-60943951478641/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:42 compute-0 python3.9[200423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:43 compute-0 python3.9[200544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432242.29157-260-234056918496738/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:44 compute-0 python3.9[200694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:44 compute-0 nova_compute[194781]: 2025-10-02 19:10:44.138 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:10:44 compute-0 nova_compute[194781]: 2025-10-02 19:10:44.159 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:10:44 compute-0 python3.9[200815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432243.5331707-260-102389277966630/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:45 compute-0 python3.9[200965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:46 compute-0 python3.9[201086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432244.9731414-260-239455387339923/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:46 compute-0 python3.9[201236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:47 compute-0 python3.9[201357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432246.3545702-260-142264291035054/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:10:47.443 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:10:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:10:47.443 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:10:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:10:47.443 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:10:47 compute-0 podman[201358]: 2025-10-02 19:10:47.46887029 +0000 UTC m=+0.069283209 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:10:47 compute-0 python3.9[201528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:48 compute-0 python3.9[201649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432247.5227141-260-265144282868228/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:49 compute-0 python3.9[201799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:49 compute-0 python3.9[201920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432248.8036542-260-234736237795431/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:50 compute-0 python3.9[202070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:50 compute-0 python3.9[202191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432249.949656-260-49156334115263/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:51 compute-0 podman[202342]: 2025-10-02 19:10:51.664868663 +0000 UTC m=+0.048048703 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:10:51 compute-0 python3.9[202341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:52 compute-0 python3.9[202437]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:52 compute-0 python3.9[202587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:53 compute-0 python3.9[202663]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:54 compute-0 python3.9[202813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:54 compute-0 python3.9[202889]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:55 compute-0 sudo[203039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uucdzzunxfzvcdthgvpaggrvmwnzuedz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432254.731928-449-253636597092049/AnsiballZ_file.py'
Oct 02 19:10:55 compute-0 sudo[203039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:55 compute-0 python3.9[203041]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:55 compute-0 sudo[203039]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:55 compute-0 podman[203135]: 2025-10-02 19:10:55.682771581 +0000 UTC m=+0.049775278 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:10:55 compute-0 podman[203142]: 2025-10-02 19:10:55.716154847 +0000 UTC m=+0.090953349 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 02 19:10:55 compute-0 sudo[203236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjvmprqsddyfhreiohwqyzcqtswfiko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432255.4723134-457-55518072243257/AnsiballZ_file.py'
Oct 02 19:10:55 compute-0 sudo[203236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:55 compute-0 python3.9[203238]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:10:56 compute-0 sudo[203236]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:56 compute-0 sudo[203388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlfzpvztrgymumpbimnglmogazjsxzpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432256.1990495-465-193004274083389/AnsiballZ_file.py'
Oct 02 19:10:56 compute-0 sudo[203388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:56 compute-0 python3.9[203390]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:10:56 compute-0 sudo[203388]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:57 compute-0 sudo[203540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctpdqfgwcjkmbarvkwchlkuwtaemdcvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432256.9958572-473-133349298723123/AnsiballZ_systemd_service.py'
Oct 02 19:10:57 compute-0 sudo[203540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:57 compute-0 python3.9[203542]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:10:57 compute-0 systemd[1]: Reloading.
Oct 02 19:10:57 compute-0 systemd-rc-local-generator[203568]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:10:57 compute-0 systemd-sysv-generator[203572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:10:58 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 02 19:10:58 compute-0 sudo[203540]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:58 compute-0 sudo[203732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prmkgptjxxoqgaqujiospealheyowdkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432258.3796105-482-158742783490007/AnsiballZ_stat.py'
Oct 02 19:10:58 compute-0 sudo[203732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:58 compute-0 python3.9[203734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:10:58 compute-0 sudo[203732]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:59 compute-0 sudo[203855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvafmcyqvhqjukcygxuyttlzlyvazcpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432258.3796105-482-158742783490007/AnsiballZ_copy.py'
Oct 02 19:10:59 compute-0 sudo[203855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:10:59 compute-0 python3.9[203857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432258.3796105-482-158742783490007/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:10:59 compute-0 sudo[203855]: pam_unix(sudo:session): session closed for user root
Oct 02 19:10:59 compute-0 sudo[203931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neigykwkiudcozpxoopbmhljxqxilfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432258.3796105-482-158742783490007/AnsiballZ_stat.py'
Oct 02 19:10:59 compute-0 sudo[203931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:00 compute-0 python3.9[203933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:11:00 compute-0 sudo[203931]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:00 compute-0 sudo[204054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glznptkbuwmtmaypuqdbuzsxbwdcyfkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432258.3796105-482-158742783490007/AnsiballZ_copy.py'
Oct 02 19:11:00 compute-0 sudo[204054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:00 compute-0 python3.9[204056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432258.3796105-482-158742783490007/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:11:00 compute-0 sudo[204054]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:01 compute-0 sudo[204206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpuwfpyjwhgcmrrwaeebzcluhslskyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432260.973996-510-192856092548056/AnsiballZ_container_config_data.py'
Oct 02 19:11:01 compute-0 sudo[204206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:01 compute-0 python3.9[204208]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Oct 02 19:11:01 compute-0 sudo[204206]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:02 compute-0 sudo[204358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kohselhnmnboxxfmcoclycfrpdhdjija ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432261.9277396-519-263175342540915/AnsiballZ_container_config_hash.py'
Oct 02 19:11:02 compute-0 sudo[204358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:02 compute-0 python3.9[204360]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:11:02 compute-0 sudo[204358]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:03 compute-0 sudo[204510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsomkxpnaacxtqenijecjojtgkjtktz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432263.029863-529-172128659142224/AnsiballZ_edpm_container_manage.py'
Oct 02 19:11:03 compute-0 sudo[204510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:03 compute-0 python3[204512]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:11:04 compute-0 podman[204550]: 2025-10-02 19:11:04.033249039 +0000 UTC m=+0.046591456 container create 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:11:04 compute-0 podman[204550]: 2025-10-02 19:11:04.007039908 +0000 UTC m=+0.020382345 image pull af55c482fa6ac3c7068a40d60290d5ada8b2ec948be38389742c3fe61801742f quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:11:04 compute-0 python3[204512]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Oct 02 19:11:04 compute-0 sudo[204510]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.200 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.200 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.201 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.202 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.202 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.203 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.203 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.204 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.204 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.366 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.367 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.367 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.367 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.519 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.520 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6019MB free_disk=72.73413467407227GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.520 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.520 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.584 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.585 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.605 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.623 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.626 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:11:04 compute-0 nova_compute[194781]: 2025-10-02 19:11:04.627 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:11:04 compute-0 sudo[204738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urtxktxbavbzexmoabskmxvuivqbdsga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432264.5058155-537-44124417196662/AnsiballZ_stat.py'
Oct 02 19:11:04 compute-0 sudo[204738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:04 compute-0 python3.9[204740]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:11:05 compute-0 sudo[204738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:05 compute-0 sudo[204892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdbstijmbxbggslicwdkrvycnncmanpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432265.433323-546-5202328848481/AnsiballZ_file.py'
Oct 02 19:11:05 compute-0 sudo[204892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:06 compute-0 python3.9[204894]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:06 compute-0 sudo[204892]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:06 compute-0 sudo[205043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwgznfscezezylyirlghxbcesvbbcgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432266.0875962-546-240545232098883/AnsiballZ_copy.py'
Oct 02 19:11:06 compute-0 sudo[205043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:06 compute-0 python3.9[205045]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432266.0875962-546-240545232098883/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:06 compute-0 sudo[205043]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:07 compute-0 sudo[205119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjrjqivpdwaukormglgavtnhcjsvqewa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432266.0875962-546-240545232098883/AnsiballZ_systemd.py'
Oct 02 19:11:07 compute-0 sudo[205119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:07 compute-0 python3.9[205121]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:11:07 compute-0 systemd[1]: Reloading.
Oct 02 19:11:07 compute-0 systemd-rc-local-generator[205149]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:07 compute-0 systemd-sysv-generator[205152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:08 compute-0 sudo[205119]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:08 compute-0 sudo[205230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inxepqgkqwohuufaoduftatvhwfkpzmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432266.0875962-546-240545232098883/AnsiballZ_systemd.py'
Oct 02 19:11:08 compute-0 sudo[205230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:08 compute-0 python3.9[205232]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:11:09 compute-0 systemd[1]: Reloading.
Oct 02 19:11:09 compute-0 systemd-rc-local-generator[205262]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:09 compute-0 systemd-sysv-generator[205265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:10 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Oct 02 19:11:10 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.
Oct 02 19:11:10 compute-0 podman[205272]: 2025-10-02 19:11:10.321797864 +0000 UTC m=+0.120833669 container init 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + sudo -E kolla_set_configs
Oct 02 19:11:10 compute-0 sudo[205293]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:11:10 compute-0 sudo[205293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:11:10 compute-0 podman[205272]: 2025-10-02 19:11:10.356187693 +0000 UTC m=+0.155223498 container start 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct 02 19:11:10 compute-0 podman[205272]: ceilometer_agent_compute
Oct 02 19:11:10 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Oct 02 19:11:10 compute-0 sudo[205230]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Validating config file
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Copying service configuration files
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: INFO:__main__:Writing out command to execute
Oct 02 19:11:10 compute-0 sudo[205293]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: ++ cat /run_command
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + ARGS=
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + sudo kolla_copy_cacerts
Oct 02 19:11:10 compute-0 sudo[205318]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:11:10 compute-0 sudo[205318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:11:10 compute-0 sudo[205318]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + [[ ! -n '' ]]
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + . kolla_extend_start
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + umask 0022
Oct 02 19:11:10 compute-0 ceilometer_agent_compute[205287]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Oct 02 19:11:10 compute-0 podman[205294]: 2025-10-02 19:11:10.469821649 +0000 UTC m=+0.099983792 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Oct 02 19:11:10 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-39e62d9c8f02c53f.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:11:10 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-39e62d9c8f02c53f.service: Failed with result 'exit-code'.
Oct 02 19:11:10 compute-0 sudo[205468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjkzozjstlgvvsyftvoyvdpcepeaiye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432270.5685534-570-158090570502436/AnsiballZ_systemd.py'
Oct 02 19:11:10 compute-0 sudo[205468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:11 compute-0 python3.9[205470]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:11:11 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Oct 02 19:11:11 compute-0 systemd[1]: libpod-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope: Deactivated successfully.
Oct 02 19:11:11 compute-0 conmon[205287]: conmon 29adca77c9edd88782ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope/container/memory.events
Oct 02 19:11:11 compute-0 podman[205474]: 2025-10-02 19:11:11.2917002 +0000 UTC m=+0.055810823 container stop 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:11:11 compute-0 podman[205474]: 2025-10-02 19:11:11.292336166 +0000 UTC m=+0.056446809 container died 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:11:11 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-39e62d9c8f02c53f.timer: Deactivated successfully.
Oct 02 19:11:11 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.
Oct 02 19:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-userdata-shm.mount: Deactivated successfully.
Oct 02 19:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d-merged.mount: Deactivated successfully.
Oct 02 19:11:11 compute-0 podman[205474]: 2025-10-02 19:11:11.338080679 +0000 UTC m=+0.102191302 container cleanup 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:11:11 compute-0 podman[205474]: ceilometer_agent_compute
Oct 02 19:11:11 compute-0 podman[205501]: ceilometer_agent_compute
Oct 02 19:11:11 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Oct 02 19:11:11 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Oct 02 19:11:11 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Oct 02 19:11:11 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccdc0da7e94d6a4f546b7247e2f69ba381dea859e325481c84abf4dbb025e76d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.
Oct 02 19:11:11 compute-0 podman[205514]: 2025-10-02 19:11:11.581143582 +0000 UTC m=+0.141512941 container init 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + sudo -E kolla_set_configs
Oct 02 19:11:11 compute-0 podman[205514]: 2025-10-02 19:11:11.614780021 +0000 UTC m=+0.175149430 container start 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct 02 19:11:11 compute-0 sudo[205535]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:11:11 compute-0 sudo[205535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:11:11 compute-0 podman[205514]: ceilometer_agent_compute
Oct 02 19:11:11 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Oct 02 19:11:11 compute-0 sudo[205468]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Validating config file
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Copying service configuration files
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: INFO:__main__:Writing out command to execute
Oct 02 19:11:11 compute-0 sudo[205535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: ++ cat /run_command
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + ARGS=
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + sudo kolla_copy_cacerts
Oct 02 19:11:11 compute-0 podman[205536]: 2025-10-02 19:11:11.713997622 +0000 UTC m=+0.079693440 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:11:11 compute-0 sudo[205560]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: sudo: unable to send audit message: Operation not permitted
Oct 02 19:11:11 compute-0 sudo[205560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:11:11 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-4edef62d157507f2.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:11:11 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-4edef62d157507f2.service: Failed with result 'exit-code'.
Oct 02 19:11:11 compute-0 sudo[205560]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + [[ ! -n '' ]]
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + . kolla_extend_start
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + umask 0022
Oct 02 19:11:11 compute-0 ceilometer_agent_compute[205529]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Oct 02 19:11:12 compute-0 sudo[205709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owswgebgpqkemuzkejxxqmycjdzaxeqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432271.8674152-578-235398228528718/AnsiballZ_stat.py'
Oct 02 19:11:12 compute-0 sudo[205709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:12 compute-0 python3.9[205711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:11:12 compute-0 sudo[205709]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.495 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.496 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.497 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.498 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.499 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.500 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.501 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.502 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.503 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.504 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.505 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.506 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.507 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.508 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.508 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.508 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.508 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.508 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.508 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.530 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.530 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.530 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.530 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.531 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.532 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.533 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.534 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.535 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.536 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.537 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.538 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.539 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.540 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.541 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.542 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.543 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.544 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.545 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.547 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.549 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.549 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.762 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.771 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.771 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.771 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 02 19:11:12 compute-0 sudo[205840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moarldysjkvtnrjvqsebkuvkhuauvwgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432271.8674152-578-235398228528718/AnsiballZ_copy.py'
Oct 02 19:11:12 compute-0 sudo[205840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.891 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.891 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.891 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.891 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.892 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.892 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.892 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.892 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.892 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.893 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.893 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.893 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.893 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.894 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.894 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.894 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.894 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.894 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.895 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.895 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.895 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.895 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.896 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.896 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.896 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.896 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.896 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.896 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.897 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.897 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.897 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.897 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.897 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.898 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.898 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.898 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.898 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.898 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.898 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.899 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.899 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.899 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.899 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.899 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.900 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.900 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.900 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.900 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.900 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.900 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.901 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.901 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.901 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.901 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.901 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.901 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.902 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.902 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.902 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.902 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.902 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.903 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.903 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.903 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.903 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.903 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.903 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.904 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.904 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.904 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.904 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.905 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.906 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.906 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.906 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.906 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.906 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.907 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.907 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.907 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.907 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.908 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.908 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.908 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.908 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.908 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.908 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.909 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.909 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.909 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.909 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.909 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.909 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.910 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.910 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.910 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.910 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.910 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.910 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.911 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.911 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.911 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.911 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.911 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.912 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.912 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.912 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.912 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.912 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.913 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.913 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.913 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.913 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.913 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.913 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.914 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.914 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.914 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.914 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.914 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.914 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.915 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.916 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.917 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.917 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.917 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.917 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.917 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.917 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.918 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.918 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.918 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.918 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.918 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.918 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.919 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.919 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.919 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.919 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.919 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.919 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.920 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.920 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.920 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.920 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.920 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.921 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.921 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.921 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.921 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.921 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.923 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.935 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.935 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.936 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.936 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.937 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.937 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.937 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.942 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.943 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.943 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.944 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.944 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.944 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.950 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.951 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.952 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.953 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.953 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.953 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.953 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:11:12.953 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:11:13 compute-0 python3.9[205842]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432271.8674152-578-235398228528718/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:11:13 compute-0 sudo[205840]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:13 compute-0 sudo[205997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqlhmxghqkfirngngwxrnwbhlkmjgxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432273.4500985-595-105233063683599/AnsiballZ_container_config_data.py'
Oct 02 19:11:13 compute-0 sudo[205997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:14 compute-0 python3.9[205999]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Oct 02 19:11:14 compute-0 sudo[205997]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:14 compute-0 sudo[206149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypyzulyxfdhqxxmjromkhhuspwrzqxfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432274.336578-604-182120672706421/AnsiballZ_container_config_hash.py'
Oct 02 19:11:14 compute-0 sudo[206149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:14 compute-0 python3.9[206151]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:11:14 compute-0 sudo[206149]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:15 compute-0 sudo[206301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwpmrcrecifdrhxwackmzxlsfhpseiai ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432275.2084398-614-158702522063250/AnsiballZ_edpm_container_manage.py'
Oct 02 19:11:15 compute-0 sudo[206301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:15 compute-0 python3[206303]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:11:16 compute-0 podman[206336]: 2025-10-02 19:11:16.038899481 +0000 UTC m=+0.055052982 container create 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Oct 02 19:11:16 compute-0 podman[206336]: 2025-10-02 19:11:16.007987665 +0000 UTC m=+0.024141196 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct 02 19:11:16 compute-0 python3[206303]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Oct 02 19:11:16 compute-0 sudo[206301]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:16 compute-0 sudo[206524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-factzyjtjzpdwgjtwoodcyuwfrmtzyvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432276.4538512-622-170024383977867/AnsiballZ_stat.py'
Oct 02 19:11:16 compute-0 sudo[206524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:17 compute-0 python3.9[206526]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:11:17 compute-0 sudo[206524]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:17 compute-0 podman[206652]: 2025-10-02 19:11:17.69270914 +0000 UTC m=+0.072412366 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:11:17 compute-0 sudo[206696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blryabtsszjytqpmwuvtijegrunuaxwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432277.352463-631-67966102098569/AnsiballZ_file.py'
Oct 02 19:11:17 compute-0 sudo[206696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:17 compute-0 python3.9[206700]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:17 compute-0 sudo[206696]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:18 compute-0 sudo[206849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jorgtehtyzjthcsacoqfohzdsyyxrlkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432278.0275548-631-110504281819583/AnsiballZ_copy.py'
Oct 02 19:11:18 compute-0 sudo[206849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:18 compute-0 python3.9[206851]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432278.0275548-631-110504281819583/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:18 compute-0 sudo[206849]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:19 compute-0 sudo[206925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vscuvlrsgsryikixoyqdbpuikvyarikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432278.0275548-631-110504281819583/AnsiballZ_systemd.py'
Oct 02 19:11:19 compute-0 sudo[206925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:19 compute-0 python3.9[206927]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:11:19 compute-0 systemd[1]: Reloading.
Oct 02 19:11:19 compute-0 systemd-rc-local-generator[206956]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:19 compute-0 systemd-sysv-generator[206960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:19 compute-0 sudo[206925]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:20 compute-0 sudo[207036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwswfplpmgquiylwlduxnxspqdxffgfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432278.0275548-631-110504281819583/AnsiballZ_systemd.py'
Oct 02 19:11:20 compute-0 sudo[207036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:20 compute-0 python3.9[207038]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:11:20 compute-0 systemd[1]: Reloading.
Oct 02 19:11:20 compute-0 systemd-rc-local-generator[207065]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:20 compute-0 systemd-sysv-generator[207071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:20 compute-0 systemd[1]: Starting node_exporter container...
Oct 02 19:11:21 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0577ce33205b6751a6587d010004f72740142b6c572861600c6daa357c01273c/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0577ce33205b6751a6587d010004f72740142b6c572861600c6daa357c01273c/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.
Oct 02 19:11:21 compute-0 podman[207078]: 2025-10-02 19:11:21.070905502 +0000 UTC m=+0.128205297 container init 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.084Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.084Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.084Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.085Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.085Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.085Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.085Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=arp
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=bcache
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=bonding
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=cpu
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=edac
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=filefd
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=netclass
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=netdev
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=netstat
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=nfs
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=nvme
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=softnet
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=systemd
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=xfs
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.086Z caller=node_exporter.go:117 level=info collector=zfs
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.087Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 02 19:11:21 compute-0 node_exporter[207094]: ts=2025-10-02T19:11:21.088Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct 02 19:11:21 compute-0 podman[207078]: 2025-10-02 19:11:21.093414804 +0000 UTC m=+0.150714579 container start 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:11:21 compute-0 podman[207078]: node_exporter
Oct 02 19:11:21 compute-0 systemd[1]: Started node_exporter container.
Oct 02 19:11:21 compute-0 sudo[207036]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:21 compute-0 podman[207103]: 2025-10-02 19:11:21.163041744 +0000 UTC m=+0.055428052 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:11:21 compute-0 sudo[207276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufxghyybhwvboutqibtlbpclwizwbjqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432281.2970665-655-22047229977874/AnsiballZ_systemd.py'
Oct 02 19:11:21 compute-0 sudo[207276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:21 compute-0 python3.9[207278]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:11:22 compute-0 systemd[1]: Stopping node_exporter container...
Oct 02 19:11:22 compute-0 podman[207280]: 2025-10-02 19:11:22.042100592 +0000 UTC m=+0.066016505 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 02 19:11:22 compute-0 systemd[1]: libpod-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope: Deactivated successfully.
Oct 02 19:11:22 compute-0 podman[207295]: 2025-10-02 19:11:22.074691573 +0000 UTC m=+0.057163688 container died 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:11:22 compute-0 systemd[1]: 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4-511826e0d6dd2569.timer: Deactivated successfully.
Oct 02 19:11:22 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.
Oct 02 19:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4-userdata-shm.mount: Deactivated successfully.
Oct 02 19:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0577ce33205b6751a6587d010004f72740142b6c572861600c6daa357c01273c-merged.mount: Deactivated successfully.
Oct 02 19:11:22 compute-0 podman[207295]: 2025-10-02 19:11:22.17565226 +0000 UTC m=+0.158124385 container cleanup 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:11:22 compute-0 podman[207295]: node_exporter
Oct 02 19:11:22 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:11:22 compute-0 podman[207328]: node_exporter
Oct 02 19:11:22 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct 02 19:11:22 compute-0 systemd[1]: Stopped node_exporter container.
Oct 02 19:11:22 compute-0 systemd[1]: Starting node_exporter container...
Oct 02 19:11:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0577ce33205b6751a6587d010004f72740142b6c572861600c6daa357c01273c/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0577ce33205b6751a6587d010004f72740142b6c572861600c6daa357c01273c/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.
Oct 02 19:11:22 compute-0 podman[207341]: 2025-10-02 19:11:22.458368614 +0000 UTC m=+0.193240645 container init 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.482Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.483Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.484Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.484Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.484Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.485Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.485Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.485Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.485Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=arp
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=bcache
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=bonding
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=cpu
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=edac
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=filefd
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=netclass
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=netdev
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=netstat
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=nfs
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=nvme
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=softnet
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=systemd
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=xfs
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.486Z caller=node_exporter.go:117 level=info collector=zfs
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.487Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 02 19:11:22 compute-0 node_exporter[207356]: ts=2025-10-02T19:11:22.488Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct 02 19:11:22 compute-0 podman[207341]: 2025-10-02 19:11:22.49564841 +0000 UTC m=+0.230520421 container start 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:11:22 compute-0 podman[207341]: node_exporter
Oct 02 19:11:22 compute-0 systemd[1]: Started node_exporter container.
Oct 02 19:11:22 compute-0 sudo[207276]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:22 compute-0 podman[207366]: 2025-10-02 19:11:22.593888785 +0000 UTC m=+0.080293337 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:11:23 compute-0 sudo[207540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcvituwgazikrnlomdqgwcjhjqwwynby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432282.7667074-663-68730477693933/AnsiballZ_stat.py'
Oct 02 19:11:23 compute-0 sudo[207540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:23 compute-0 python3.9[207542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:11:23 compute-0 sudo[207540]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:23 compute-0 sudo[207663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxvefgsfnjsnxawqibxkirunqmstqkcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432282.7667074-663-68730477693933/AnsiballZ_copy.py'
Oct 02 19:11:23 compute-0 sudo[207663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:23 compute-0 python3.9[207665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432282.7667074-663-68730477693933/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:11:23 compute-0 sudo[207663]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:24 compute-0 sudo[207815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phnztstfhqsybkcxyhfxogqdjhndslbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432284.1977186-680-99709670852456/AnsiballZ_container_config_data.py'
Oct 02 19:11:24 compute-0 sudo[207815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:24 compute-0 python3.9[207817]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 02 19:11:24 compute-0 sudo[207815]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:25 compute-0 sudo[207967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokhrztswwtccmovywyammrltjcszhof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432284.9673567-689-45776422885595/AnsiballZ_container_config_hash.py'
Oct 02 19:11:25 compute-0 sudo[207967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:25 compute-0 python3.9[207969]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:11:25 compute-0 sudo[207967]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:26 compute-0 podman[208093]: 2025-10-02 19:11:26.306647567 +0000 UTC m=+0.063383344 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 19:11:26 compute-0 sudo[208149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwigwecfgxlypaocbpunmcfjguobxnoa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432285.9476023-699-26047631962235/AnsiballZ_edpm_container_manage.py'
Oct 02 19:11:26 compute-0 sudo[208149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:26 compute-0 podman[208094]: 2025-10-02 19:11:26.39320854 +0000 UTC m=+0.134394042 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:11:26 compute-0 python3[208157]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:11:28 compute-0 podman[208175]: 2025-10-02 19:11:28.428908843 +0000 UTC m=+1.769643475 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 02 19:11:28 compute-0 podman[208271]: 2025-10-02 19:11:28.612627782 +0000 UTC m=+0.083788840 container create 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:11:28 compute-0 podman[208271]: 2025-10-02 19:11:28.55942063 +0000 UTC m=+0.030581718 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 02 19:11:28 compute-0 python3[208157]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Oct 02 19:11:28 compute-0 sudo[208149]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:29 compute-0 sudo[208454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosnrzfkafffrncfjodckpslviknqwak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432288.958154-707-119087948624241/AnsiballZ_stat.py'
Oct 02 19:11:29 compute-0 sudo[208454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:29 compute-0 python3.9[208456]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:11:29 compute-0 sudo[208454]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 sudo[208608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veujtxtgqkragfbqupqxtntfzyseqhdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432289.7552001-716-73338175650524/AnsiballZ_file.py'
Oct 02 19:11:30 compute-0 sudo[208608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:30 compute-0 python3.9[208610]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:30 compute-0 sudo[208608]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:30 compute-0 sudo[208759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csutirsvygnpkmjzhpzqdocrekdkbpzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432290.3926246-716-60191716264952/AnsiballZ_copy.py'
Oct 02 19:11:30 compute-0 sudo[208759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:30 compute-0 python3.9[208761]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432290.3926246-716-60191716264952/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:31 compute-0 sudo[208759]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:31 compute-0 sudo[208835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlwsysjbtuygaysmgqqnzzcecjdvhthq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432290.3926246-716-60191716264952/AnsiballZ_systemd.py'
Oct 02 19:11:31 compute-0 sudo[208835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:31 compute-0 python3.9[208837]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:11:31 compute-0 systemd[1]: Reloading.
Oct 02 19:11:31 compute-0 systemd-rc-local-generator[208864]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:31 compute-0 systemd-sysv-generator[208867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:31 compute-0 sudo[208835]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:32 compute-0 sudo[208945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvvxhvutcwbebkftfrpoqipowgyzwfwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432290.3926246-716-60191716264952/AnsiballZ_systemd.py'
Oct 02 19:11:32 compute-0 sudo[208945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:32 compute-0 python3.9[208947]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:11:32 compute-0 systemd[1]: Reloading.
Oct 02 19:11:32 compute-0 systemd-rc-local-generator[208978]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:32 compute-0 systemd-sysv-generator[208981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:32 compute-0 systemd[1]: Starting podman_exporter container...
Oct 02 19:11:33 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeb0f3775124e7bb2a80467fff0ec7a9fce1cefa500c0d8f41b9db65107e5f2/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeb0f3775124e7bb2a80467fff0ec7a9fce1cefa500c0d8f41b9db65107e5f2/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.
Oct 02 19:11:33 compute-0 podman[208988]: 2025-10-02 19:11:33.111705024 +0000 UTC m=+0.136834997 container init 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.133Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.134Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.134Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.134Z caller=handler.go:105 level=info collector=container
Oct 02 19:11:33 compute-0 podman[208988]: 2025-10-02 19:11:33.147803048 +0000 UTC m=+0.172933021 container start 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:11:33 compute-0 podman[208988]: podman_exporter
Oct 02 19:11:33 compute-0 systemd[1]: Starting Podman API Service...
Oct 02 19:11:33 compute-0 systemd[1]: Started Podman API Service.
Oct 02 19:11:33 compute-0 systemd[1]: Started podman_exporter container.
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="Setting parallel job count to 25"
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="Using sqlite as database backend"
Oct 02 19:11:33 compute-0 sudo[208945]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 02 19:11:33 compute-0 podman[209015]: @ - - [02/Oct/2025:19:11:33 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 02 19:11:33 compute-0 podman[209015]: time="2025-10-02T19:11:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:11:33 compute-0 podman[209014]: 2025-10-02 19:11:33.234804243 +0000 UTC m=+0.067409572 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:11:33 compute-0 systemd[1]: 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62-2a41e6687a6cbe9.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:11:33 compute-0 systemd[1]: 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62-2a41e6687a6cbe9.service: Failed with result 'exit-code'.
Oct 02 19:11:33 compute-0 podman[209015]: @ - - [02/Oct/2025:19:11:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 22079 "" "Go-http-client/1.1"
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.267Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.268Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 02 19:11:33 compute-0 podman_exporter[209004]: ts=2025-10-02T19:11:33.269Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 02 19:11:33 compute-0 sudo[209201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyidyiyasczikjdgocwspjebijakzuov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432293.3839624-740-134667385520412/AnsiballZ_systemd.py'
Oct 02 19:11:33 compute-0 sudo[209201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:33 compute-0 python3.9[209203]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:11:33 compute-0 systemd[1]: Stopping podman_exporter container...
Oct 02 19:11:34 compute-0 podman[209015]: @ - - [02/Oct/2025:19:11:33 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 0 "" "Go-http-client/1.1"
Oct 02 19:11:34 compute-0 systemd[1]: libpod-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope: Deactivated successfully.
Oct 02 19:11:34 compute-0 podman[209207]: 2025-10-02 19:11:34.051166476 +0000 UTC m=+0.070673620 container died 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:11:34 compute-0 systemd[1]: 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62-2a41e6687a6cbe9.timer: Deactivated successfully.
Oct 02 19:11:34 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.
Oct 02 19:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62-userdata-shm.mount: Deactivated successfully.
Oct 02 19:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eeb0f3775124e7bb2a80467fff0ec7a9fce1cefa500c0d8f41b9db65107e5f2-merged.mount: Deactivated successfully.
Oct 02 19:11:34 compute-0 podman[209207]: 2025-10-02 19:11:34.251967521 +0000 UTC m=+0.271474665 container cleanup 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:11:34 compute-0 podman[209207]: podman_exporter
Oct 02 19:11:34 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:11:34 compute-0 podman[209234]: podman_exporter
Oct 02 19:11:34 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 02 19:11:34 compute-0 systemd[1]: Stopped podman_exporter container.
Oct 02 19:11:34 compute-0 systemd[1]: Starting podman_exporter container...
Oct 02 19:11:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeb0f3775124e7bb2a80467fff0ec7a9fce1cefa500c0d8f41b9db65107e5f2/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeb0f3775124e7bb2a80467fff0ec7a9fce1cefa500c0d8f41b9db65107e5f2/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.
Oct 02 19:11:34 compute-0 podman[209248]: 2025-10-02 19:11:34.459685211 +0000 UTC m=+0.111979203 container init 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.476Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.476Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.476Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.476Z caller=handler.go:105 level=info collector=container
Oct 02 19:11:34 compute-0 podman[209015]: @ - - [02/Oct/2025:19:11:34 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 02 19:11:34 compute-0 podman[209015]: time="2025-10-02T19:11:34Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:11:34 compute-0 podman[209248]: 2025-10-02 19:11:34.487203356 +0000 UTC m=+0.139497348 container start 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:11:34 compute-0 podman[209248]: podman_exporter
Oct 02 19:11:34 compute-0 systemd[1]: Started podman_exporter container.
Oct 02 19:11:34 compute-0 podman[209015]: @ - - [02/Oct/2025:19:11:34 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 22081 "" "Go-http-client/1.1"
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.504Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.504Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 02 19:11:34 compute-0 podman_exporter[209263]: ts=2025-10-02T19:11:34.504Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 02 19:11:34 compute-0 sudo[209201]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:34 compute-0 podman[209273]: 2025-10-02 19:11:34.549338396 +0000 UTC m=+0.051501257 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:11:35 compute-0 sudo[209447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptseuxrrcktqrecpqruytkkasxnayql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432294.7219622-748-111087768439777/AnsiballZ_stat.py'
Oct 02 19:11:35 compute-0 sudo[209447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:35 compute-0 python3.9[209449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:11:35 compute-0 sudo[209447]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:35 compute-0 sudo[209570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynddmklhwqeicskvkebygibvlfwdsnjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432294.7219622-748-111087768439777/AnsiballZ_copy.py'
Oct 02 19:11:35 compute-0 sudo[209570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:35 compute-0 python3.9[209572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432294.7219622-748-111087768439777/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:11:35 compute-0 sudo[209570]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:36 compute-0 sudo[209722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyfnherxzfwysnkjwhvtrluxzdggsly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432296.2078376-765-35520136341384/AnsiballZ_container_config_data.py'
Oct 02 19:11:36 compute-0 sudo[209722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:36 compute-0 python3.9[209724]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 02 19:11:36 compute-0 sudo[209722]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:37 compute-0 sudo[209874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvnjsmgbslljdxyvwxujpbytdoambqmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432296.9374459-774-171729818060580/AnsiballZ_container_config_hash.py'
Oct 02 19:11:37 compute-0 sudo[209874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:37 compute-0 python3.9[209876]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:11:37 compute-0 sudo[209874]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:38 compute-0 sudo[210026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dalxhyxdtpxqasegpuqnrbkjovzkpkrz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432297.8389735-784-231134032298847/AnsiballZ_edpm_container_manage.py'
Oct 02 19:11:38 compute-0 sudo[210026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:38 compute-0 python3[210028]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:11:40 compute-0 podman[210039]: 2025-10-02 19:11:40.738196409 +0000 UTC m=+2.197218999 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:11:40 compute-0 podman[210138]: 2025-10-02 19:11:40.8946627 +0000 UTC m=+0.055779102 container create a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, architecture=x86_64, vcs-type=git, name=ubi9-minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public)
Oct 02 19:11:40 compute-0 podman[210138]: 2025-10-02 19:11:40.858896574 +0000 UTC m=+0.020012966 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:11:40 compute-0 python3[210028]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 02 19:11:41 compute-0 sudo[210026]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:41 compute-0 sudo[210326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwzfnpzlhevihluweuolcembxpnzzcbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432301.233442-792-59237741266213/AnsiballZ_stat.py'
Oct 02 19:11:41 compute-0 sudo[210326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:41 compute-0 python3.9[210328]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:11:41 compute-0 sudo[210326]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:42 compute-0 sudo[210491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inqngvydfyqeyntumarkwiawkncgyiyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432302.0197096-801-224988086709611/AnsiballZ_file.py'
Oct 02 19:11:42 compute-0 sudo[210491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:42 compute-0 podman[210454]: 2025-10-02 19:11:42.391772412 +0000 UTC m=+0.074819611 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible)
Oct 02 19:11:42 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-4edef62d157507f2.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:11:42 compute-0 systemd[1]: 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b-4edef62d157507f2.service: Failed with result 'exit-code'.
Oct 02 19:11:42 compute-0 python3.9[210500]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:42 compute-0 sudo[210491]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:43 compute-0 sudo[210650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixfjqtxfxcbtyeepqqhnndwswmdusjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432302.6646225-801-73922384902008/AnsiballZ_copy.py'
Oct 02 19:11:43 compute-0 sudo[210650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:43 compute-0 python3.9[210652]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432302.6646225-801-73922384902008/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:43 compute-0 sudo[210650]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:43 compute-0 sudo[210726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlullxclomrygwclzlcvbnekzhxvprft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432302.6646225-801-73922384902008/AnsiballZ_systemd.py'
Oct 02 19:11:43 compute-0 sudo[210726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:44 compute-0 python3.9[210728]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:11:44 compute-0 systemd[1]: Reloading.
Oct 02 19:11:44 compute-0 systemd-sysv-generator[210759]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:44 compute-0 systemd-rc-local-generator[210756]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:44 compute-0 sudo[210726]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:44 compute-0 sudo[210837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsgecofylaznctluixznkjtzrfiybcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432302.6646225-801-73922384902008/AnsiballZ_systemd.py'
Oct 02 19:11:44 compute-0 sudo[210837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:45 compute-0 python3.9[210839]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:11:45 compute-0 systemd[1]: Reloading.
Oct 02 19:11:45 compute-0 systemd-rc-local-generator[210867]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:11:45 compute-0 systemd-sysv-generator[210870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:11:45 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 02 19:11:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:45 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.
Oct 02 19:11:45 compute-0 podman[210880]: 2025-10-02 19:11:45.723690858 +0000 UTC m=+0.128688210 container init a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *bridge.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *coverage.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *datapath.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *iface.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *memory.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *ovnnorthd.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *ovn.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *ovsdbserver.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *pmd_perf.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *pmd_rxq.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: INFO    19:11:45 main.go:48: registering *vswitch.Collector
Oct 02 19:11:45 compute-0 openstack_network_exporter[210895]: NOTICE  19:11:45 main.go:76: listening on https://:9105/metrics
Oct 02 19:11:45 compute-0 podman[210880]: 2025-10-02 19:11:45.748795649 +0000 UTC m=+0.153792981 container start a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:11:45 compute-0 podman[210880]: openstack_network_exporter
Oct 02 19:11:45 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 02 19:11:45 compute-0 sudo[210837]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:45 compute-0 podman[210905]: 2025-10-02 19:11:45.863866993 +0000 UTC m=+0.092440911 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350)
Oct 02 19:11:46 compute-0 sudo[211076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cosnbxyjndqiujqrvlwwjmmgwisfchbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432306.0630436-825-24645423088008/AnsiballZ_systemd.py'
Oct 02 19:11:46 compute-0 sudo[211076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:46 compute-0 python3.9[211078]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:11:46 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Oct 02 19:11:46 compute-0 systemd[1]: libpod-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope: Deactivated successfully.
Oct 02 19:11:46 compute-0 podman[211082]: 2025-10-02 19:11:46.898778346 +0000 UTC m=+0.051654151 container died a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Oct 02 19:11:46 compute-0 systemd[1]: a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1-7b1ad7172a8754e9.timer: Deactivated successfully.
Oct 02 19:11:46 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.
Oct 02 19:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1-userdata-shm.mount: Deactivated successfully.
Oct 02 19:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c-merged.mount: Deactivated successfully.
Oct 02 19:11:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:11:47.444 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:11:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:11:47.446 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:11:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:11:47.446 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:11:47 compute-0 podman[211082]: 2025-10-02 19:11:47.71970821 +0000 UTC m=+0.872583995 container cleanup a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Oct 02 19:11:47 compute-0 podman[211082]: openstack_network_exporter
Oct 02 19:11:47 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 02 19:11:47 compute-0 podman[211112]: openstack_network_exporter
Oct 02 19:11:47 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 02 19:11:47 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct 02 19:11:47 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 02 19:11:47 compute-0 podman[211111]: 2025-10-02 19:11:47.811024809 +0000 UTC m=+0.064125804 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:11:47 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf95c99b88b934919c45cb236b1fab63973103bb50bb0658856a63d1ece0fc6c/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:11:47 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.
Oct 02 19:11:48 compute-0 podman[211143]: 2025-10-02 19:11:48.020294961 +0000 UTC m=+0.200184520 container init a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *bridge.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *coverage.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *datapath.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *iface.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *memory.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *ovnnorthd.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *ovn.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *ovsdbserver.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *pmd_perf.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *pmd_rxq.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: INFO    19:11:48 main.go:48: registering *vswitch.Collector
Oct 02 19:11:48 compute-0 openstack_network_exporter[211160]: NOTICE  19:11:48 main.go:76: listening on https://:9105/metrics
Oct 02 19:11:48 compute-0 podman[211143]: 2025-10-02 19:11:48.05732503 +0000 UTC m=+0.237214589 container start a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc.)
Oct 02 19:11:48 compute-0 podman[211143]: openstack_network_exporter
Oct 02 19:11:48 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 02 19:11:48 compute-0 sudo[211076]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:48 compute-0 podman[211170]: 2025-10-02 19:11:48.164559636 +0000 UTC m=+0.090391426 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct 02 19:11:48 compute-0 sudo[211340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gymwgarbromlufgetrhfsqicoicbkphu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432308.351482-833-259988154710491/AnsiballZ_find.py'
Oct 02 19:11:48 compute-0 sudo[211340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:48 compute-0 python3.9[211342]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:11:49 compute-0 sudo[211340]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:49 compute-0 sudo[211492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxyihccqcrlbiasqbcdvilbkcdszjupy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432309.452199-843-15114739802915/AnsiballZ_podman_container_info.py'
Oct 02 19:11:49 compute-0 sudo[211492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:50 compute-0 python3.9[211494]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 02 19:11:50 compute-0 sudo[211492]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:50 compute-0 sudo[211657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fayjknqrroyhfoxcbascavsyeobesgpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432310.367845-851-183637887443327/AnsiballZ_podman_container_exec.py'
Oct 02 19:11:50 compute-0 sudo[211657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:51 compute-0 python3.9[211659]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:11:51 compute-0 systemd[1]: Started libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope.
Oct 02 19:11:51 compute-0 podman[211660]: 2025-10-02 19:11:51.290866929 +0000 UTC m=+0.147189804 container exec d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:11:51 compute-0 podman[211679]: 2025-10-02 19:11:51.361436044 +0000 UTC m=+0.054472756 container exec_died d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:11:51 compute-0 podman[211660]: 2025-10-02 19:11:51.368616436 +0000 UTC m=+0.224939311 container exec_died d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:11:51 compute-0 systemd[1]: libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope: Deactivated successfully.
Oct 02 19:11:51 compute-0 sudo[211657]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:51 compute-0 sudo[211841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemlxuuwtcelnwxolwcyyibioiswasja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432311.5862339-859-137588495039073/AnsiballZ_podman_container_exec.py'
Oct 02 19:11:51 compute-0 sudo[211841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:52 compute-0 python3.9[211843]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:11:52 compute-0 systemd[1]: Started libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope.
Oct 02 19:11:52 compute-0 podman[211844]: 2025-10-02 19:11:52.199938309 +0000 UTC m=+0.094167337 container exec d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:11:52 compute-0 podman[211844]: 2025-10-02 19:11:52.231862792 +0000 UTC m=+0.126091830 container exec_died d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:11:52 compute-0 systemd[1]: libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope: Deactivated successfully.
Oct 02 19:11:52 compute-0 podman[211862]: 2025-10-02 19:11:52.271352767 +0000 UTC m=+0.073206667 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 19:11:52 compute-0 sudo[211841]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:52 compute-0 sudo[212065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbtqojkztdmhqobzetekdbziyoerglh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432312.4771545-867-180618986481543/AnsiballZ_file.py'
Oct 02 19:11:52 compute-0 sudo[212065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:52 compute-0 podman[212021]: 2025-10-02 19:11:52.808450598 +0000 UTC m=+0.087849638 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:11:53 compute-0 python3.9[212074]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:53 compute-0 sudo[212065]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:53 compute-0 sudo[212224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqomgzbdyjlcqmzehhzfgcmzozbelta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432313.252802-876-262100441530211/AnsiballZ_podman_container_info.py'
Oct 02 19:11:53 compute-0 sudo[212224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:53 compute-0 python3.9[212226]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 02 19:11:53 compute-0 sudo[212224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:54 compute-0 sudo[212390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzmvmofjhbqwcmtleqrcmshkkctxteqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432314.1701756-884-19501903255693/AnsiballZ_podman_container_exec.py'
Oct 02 19:11:54 compute-0 sudo[212390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:54 compute-0 python3.9[212392]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:11:54 compute-0 systemd[1]: Started libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope.
Oct 02 19:11:54 compute-0 podman[212393]: 2025-10-02 19:11:54.925609996 +0000 UTC m=+0.236868429 container exec 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:11:54 compute-0 podman[212393]: 2025-10-02 19:11:54.966487129 +0000 UTC m=+0.277745542 container exec_died 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:11:54 compute-0 systemd[1]: libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope: Deactivated successfully.
Oct 02 19:11:55 compute-0 sudo[212390]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:55 compute-0 sudo[212574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhlnjogrlkkknsyebqfvtzkwtbhcrld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432315.1907198-892-225194224406717/AnsiballZ_podman_container_exec.py'
Oct 02 19:11:55 compute-0 sudo[212574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:55 compute-0 python3.9[212576]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:11:55 compute-0 systemd[1]: Started libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope.
Oct 02 19:11:55 compute-0 podman[212577]: 2025-10-02 19:11:55.788921494 +0000 UTC m=+0.083649136 container exec 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:11:55 compute-0 podman[212597]: 2025-10-02 19:11:55.861398 +0000 UTC m=+0.058634957 container exec_died 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:11:55 compute-0 podman[212577]: 2025-10-02 19:11:55.868294165 +0000 UTC m=+0.163021817 container exec_died 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:11:55 compute-0 systemd[1]: libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope: Deactivated successfully.
Oct 02 19:11:55 compute-0 sudo[212574]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:56 compute-0 sudo[212778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aemzkxjtdjnmeeaatbmpnxsajolsxujl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432316.136912-900-157543993318782/AnsiballZ_file.py'
Oct 02 19:11:56 compute-0 sudo[212778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:56 compute-0 podman[212733]: 2025-10-02 19:11:56.514149092 +0000 UTC m=+0.076009563 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 19:11:56 compute-0 podman[212734]: 2025-10-02 19:11:56.524826297 +0000 UTC m=+0.086390420 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 19:11:56 compute-0 python3.9[212798]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:11:56 compute-0 sudo[212778]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:57 compute-0 sudo[212955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shuehnrswptszbuvxfytprmnupkvxnjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432316.9146974-909-83902116167120/AnsiballZ_podman_container_info.py'
Oct 02 19:11:57 compute-0 sudo[212955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:57 compute-0 python3.9[212957]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 02 19:11:57 compute-0 sudo[212955]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:58 compute-0 sudo[213120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lysoldtodekailojmpavhnwckqegskux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432317.7419884-917-17294129794853/AnsiballZ_podman_container_exec.py'
Oct 02 19:11:58 compute-0 sudo[213120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:58 compute-0 python3.9[213122]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:11:58 compute-0 systemd[1]: Started libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope.
Oct 02 19:11:58 compute-0 podman[213123]: 2025-10-02 19:11:58.520419228 +0000 UTC m=+0.164607309 container exec e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:11:58 compute-0 podman[213142]: 2025-10-02 19:11:58.586448823 +0000 UTC m=+0.052435373 container exec_died e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 19:11:58 compute-0 podman[213123]: 2025-10-02 19:11:58.602918293 +0000 UTC m=+0.247106374 container exec_died e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:11:58 compute-0 systemd[1]: libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope: Deactivated successfully.
Oct 02 19:11:58 compute-0 sudo[213120]: pam_unix(sudo:session): session closed for user root
Oct 02 19:11:59 compute-0 sudo[213301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvpqzzjvndkrtqlidmateumcvcntxtdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432318.9107-925-271873824504587/AnsiballZ_podman_container_exec.py'
Oct 02 19:11:59 compute-0 sudo[213301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:11:59 compute-0 python3.9[213303]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:11:59 compute-0 systemd[1]: Started libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope.
Oct 02 19:11:59 compute-0 podman[213304]: 2025-10-02 19:11:59.587115741 +0000 UTC m=+0.128876165 container exec e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:11:59 compute-0 podman[213324]: 2025-10-02 19:11:59.652339743 +0000 UTC m=+0.053960292 container exec_died e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:11:59 compute-0 podman[213304]: 2025-10-02 19:11:59.679649063 +0000 UTC m=+0.221409467 container exec_died e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:11:59 compute-0 systemd[1]: libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope: Deactivated successfully.
Oct 02 19:11:59 compute-0 sudo[213301]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 sudo[213489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qouknzyrautkwmlyosilfjbkilsbilay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432319.9199677-933-41680507986708/AnsiballZ_file.py'
Oct 02 19:12:00 compute-0 sudo[213489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:00 compute-0 python3.9[213491]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:00 compute-0 sudo[213489]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:00 compute-0 sudo[213641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqozlpvnkxemvqobzgvypvdxqfkwqrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432320.6655104-942-151516231774912/AnsiballZ_podman_container_info.py'
Oct 02 19:12:00 compute-0 sudo[213641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:01 compute-0 python3.9[213643]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 02 19:12:01 compute-0 sudo[213641]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:01 compute-0 sudo[213806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvoefafzufdqwqizmmegtudbmfcdvsfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432321.4576805-950-123233713980070/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:01 compute-0 sudo[213806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:02 compute-0 python3.9[213808]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:02 compute-0 systemd[1]: Started libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope.
Oct 02 19:12:02 compute-0 podman[213809]: 2025-10-02 19:12:02.110835503 +0000 UTC m=+0.080462234 container exec d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:12:02 compute-0 podman[213809]: 2025-10-02 19:12:02.116096003 +0000 UTC m=+0.085722694 container exec_died d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:12:02 compute-0 sudo[213806]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:02 compute-0 systemd[1]: libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope: Deactivated successfully.
Oct 02 19:12:02 compute-0 sudo[213990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veuhqgrzpoplxyuiakipfmwtzmlchtco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432322.3320076-958-168503471566238/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:02 compute-0 sudo[213990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:02 compute-0 python3.9[213992]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:02 compute-0 systemd[1]: Started libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope.
Oct 02 19:12:02 compute-0 podman[213993]: 2025-10-02 19:12:02.916197993 +0000 UTC m=+0.067868127 container exec d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct 02 19:12:02 compute-0 podman[213993]: 2025-10-02 19:12:02.946671868 +0000 UTC m=+0.098342002 container exec_died d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:12:02 compute-0 systemd[1]: libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope: Deactivated successfully.
Oct 02 19:12:02 compute-0 sudo[213990]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:03 compute-0 sudo[214174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akcplowjpulfcalypxesqctgrwdaipzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432323.1738288-966-26111248725377/AnsiballZ_file.py'
Oct 02 19:12:03 compute-0 sudo[214174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:03 compute-0 python3.9[214176]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:03 compute-0 sudo[214174]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:04 compute-0 sudo[214326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgqdbbpgwyqfwxxlvlfgnnmlpqsjajhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432324.0657635-975-266871396710603/AnsiballZ_podman_container_info.py'
Oct 02 19:12:04 compute-0 sudo[214326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:04 compute-0 python3.9[214328]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.620 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.622 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.647 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.647 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.647 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.677 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.677 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.677 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.678 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:12:04 compute-0 sudo[214326]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:04 compute-0 podman[214330]: 2025-10-02 19:12:04.702027548 +0000 UTC m=+0.071336399 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.806 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.807 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5810MB free_disk=72.5645866394043GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.807 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.807 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.890 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.891 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.913 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.927 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.930 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:12:04 compute-0 nova_compute[194781]: 2025-10-02 19:12:04.930 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:12:05 compute-0 sudo[214514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efhgchchbryqdnvyhvpevdnffygrzwtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432324.8862-983-75731141949591/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:05 compute-0 sudo[214514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:05 compute-0 nova_compute[194781]: 2025-10-02 19:12:05.317 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:05 compute-0 nova_compute[194781]: 2025-10-02 19:12:05.318 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:12:05 compute-0 nova_compute[194781]: 2025-10-02 19:12:05.318 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:12:05 compute-0 nova_compute[194781]: 2025-10-02 19:12:05.342 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:12:05 compute-0 nova_compute[194781]: 2025-10-02 19:12:05.342 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:05 compute-0 python3.9[214516]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:05 compute-0 systemd[1]: Started libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope.
Oct 02 19:12:05 compute-0 podman[214517]: 2025-10-02 19:12:05.576943979 +0000 UTC m=+0.147059736 container exec 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct 02 19:12:05 compute-0 podman[214517]: 2025-10-02 19:12:05.625601261 +0000 UTC m=+0.195716968 container exec_died 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 02 19:12:05 compute-0 systemd[1]: libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope: Deactivated successfully.
Oct 02 19:12:05 compute-0 sudo[214514]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:06 compute-0 nova_compute[194781]: 2025-10-02 19:12:06.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:06 compute-0 nova_compute[194781]: 2025-10-02 19:12:06.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:06 compute-0 nova_compute[194781]: 2025-10-02 19:12:06.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:12:06 compute-0 nova_compute[194781]: 2025-10-02 19:12:06.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:12:06 compute-0 sudo[214697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedohfmpzfcbfsqikkkcttmjrpuzerll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432325.9202394-991-223900125581406/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:06 compute-0 sudo[214697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:06 compute-0 python3.9[214699]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:06 compute-0 systemd[1]: Started libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope.
Oct 02 19:12:06 compute-0 podman[214700]: 2025-10-02 19:12:06.897681139 +0000 UTC m=+0.328622884 container exec 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:12:07 compute-0 podman[214720]: 2025-10-02 19:12:07.089913803 +0000 UTC m=+0.179755821 container exec_died 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:12:07 compute-0 podman[214700]: 2025-10-02 19:12:07.149772445 +0000 UTC m=+0.580714220 container exec_died 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:12:07 compute-0 systemd[1]: libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope: Deactivated successfully.
Oct 02 19:12:07 compute-0 sudo[214697]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:07 compute-0 sudo[214883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbwfojertlexzampsuohhiqxjjtqklvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432327.3861752-999-152590478510028/AnsiballZ_file.py'
Oct 02 19:12:07 compute-0 sudo[214883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:07 compute-0 python3.9[214885]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:07 compute-0 sudo[214883]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:08 compute-0 sudo[215035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuakqaeiecjtazlkkgmbfrpenhrpjwst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432328.161044-1008-148137325242395/AnsiballZ_podman_container_info.py'
Oct 02 19:12:08 compute-0 sudo[215035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:08 compute-0 python3.9[215037]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 02 19:12:08 compute-0 sudo[215035]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:09 compute-0 sudo[215201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kswzcbtdthqyxncfrqvdyzjpnuwxaxks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432329.011711-1016-103380009820195/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:09 compute-0 sudo[215201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:09 compute-0 python3.9[215203]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:09 compute-0 systemd[1]: Started libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope.
Oct 02 19:12:09 compute-0 podman[215204]: 2025-10-02 19:12:09.66103336 +0000 UTC m=+0.078189603 container exec 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:12:09 compute-0 podman[215204]: 2025-10-02 19:12:09.695686127 +0000 UTC m=+0.112842310 container exec_died 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:12:09 compute-0 systemd[1]: libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope: Deactivated successfully.
Oct 02 19:12:09 compute-0 sudo[215201]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:10 compute-0 sudo[215384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amufghiqpqyiwdpsvinrtszcyzrxvhxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432329.948521-1024-94184559341310/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:10 compute-0 sudo[215384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:10 compute-0 python3.9[215386]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:10 compute-0 systemd[1]: Started libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope.
Oct 02 19:12:10 compute-0 podman[215387]: 2025-10-02 19:12:10.638229568 +0000 UTC m=+0.081805990 container exec 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:12:10 compute-0 podman[215387]: 2025-10-02 19:12:10.667585144 +0000 UTC m=+0.111161596 container exec_died 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:12:10 compute-0 systemd[1]: libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope: Deactivated successfully.
Oct 02 19:12:10 compute-0 sudo[215384]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:11 compute-0 sudo[215569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgycynssfvklcnywcmohovipadixjxod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432330.928004-1032-196805385868656/AnsiballZ_file.py'
Oct 02 19:12:11 compute-0 sudo[215569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:11 compute-0 python3.9[215571]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:11 compute-0 sudo[215569]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:12 compute-0 sudo[215721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aijalcwzgeikifghmuyhxxrbueqbekil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432331.731291-1041-42881219701773/AnsiballZ_podman_container_info.py'
Oct 02 19:12:12 compute-0 sudo[215721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:12 compute-0 python3.9[215723]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 02 19:12:12 compute-0 sudo[215721]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:12 compute-0 podman[215761]: 2025-10-02 19:12:12.728973141 +0000 UTC m=+0.101194088 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:12:13 compute-0 sudo[215906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgfpyquloxthawgfcguaycjrqzkvhsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432332.6882818-1049-147623708136887/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:13 compute-0 sudo[215906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:13 compute-0 python3.9[215908]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:13 compute-0 systemd[1]: Started libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope.
Oct 02 19:12:13 compute-0 podman[215909]: 2025-10-02 19:12:13.405376141 +0000 UTC m=+0.111028222 container exec 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:12:13 compute-0 podman[215909]: 2025-10-02 19:12:13.438526548 +0000 UTC m=+0.144178569 container exec_died 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:12:13 compute-0 systemd[1]: libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope: Deactivated successfully.
Oct 02 19:12:13 compute-0 sudo[215906]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:14 compute-0 sudo[216089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmwkcpghcxalcbdncxhhmfulvmfyrfzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432333.7404146-1057-144504320688345/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:14 compute-0 sudo[216089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:14 compute-0 python3.9[216091]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:14 compute-0 systemd[1]: Started libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope.
Oct 02 19:12:14 compute-0 podman[216092]: 2025-10-02 19:12:14.50406135 +0000 UTC m=+0.095003014 container exec 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:12:14 compute-0 podman[216092]: 2025-10-02 19:12:14.537746801 +0000 UTC m=+0.128688405 container exec_died 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:12:14 compute-0 systemd[1]: libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope: Deactivated successfully.
Oct 02 19:12:14 compute-0 sudo[216089]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:15 compute-0 sudo[216273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vchiimepfojueojieepbjpklsrgwktyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432334.8553886-1065-63250230417278/AnsiballZ_file.py'
Oct 02 19:12:15 compute-0 sudo[216273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:15 compute-0 python3.9[216275]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:15 compute-0 sudo[216273]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:16 compute-0 sudo[216425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkqongredaecjhdyilictpczncbucoja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432335.7675483-1074-135834766636343/AnsiballZ_podman_container_info.py'
Oct 02 19:12:16 compute-0 sudo[216425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:16 compute-0 python3.9[216427]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 02 19:12:16 compute-0 sudo[216425]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:17 compute-0 sudo[216591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raneksvrqejbbvlvszpwyizvjspkutuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432336.7089221-1082-219227421961569/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:17 compute-0 sudo[216591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:17 compute-0 python3.9[216593]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:17 compute-0 systemd[1]: Started libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope.
Oct 02 19:12:17 compute-0 podman[216594]: 2025-10-02 19:12:17.359135485 +0000 UTC m=+0.082637552 container exec a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Oct 02 19:12:17 compute-0 podman[216594]: 2025-10-02 19:12:17.38958632 +0000 UTC m=+0.113088357 container exec_died a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Oct 02 19:12:17 compute-0 sudo[216591]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:17 compute-0 systemd[1]: libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope: Deactivated successfully.
Oct 02 19:12:17 compute-0 sudo[216783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxfuymzlohpfmspbssziqdexowcmctov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432337.6312766-1090-188597538199132/AnsiballZ_podman_container_exec.py'
Oct 02 19:12:17 compute-0 sudo[216783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:18 compute-0 podman[216748]: 2025-10-02 19:12:18.008507821 +0000 UTC m=+0.083710031 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:12:18 compute-0 python3.9[216793]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:12:18 compute-0 systemd[1]: Started libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope.
Oct 02 19:12:18 compute-0 podman[216797]: 2025-10-02 19:12:18.414994318 +0000 UTC m=+0.130320148 container exec a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Oct 02 19:12:18 compute-0 podman[216797]: 2025-10-02 19:12:18.474635534 +0000 UTC m=+0.189961374 container exec_died a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Oct 02 19:12:18 compute-0 podman[216814]: 2025-10-02 19:12:18.52044886 +0000 UTC m=+0.103154981 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container)
Oct 02 19:12:18 compute-0 systemd[1]: libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope: Deactivated successfully.
Oct 02 19:12:18 compute-0 sudo[216783]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:19 compute-0 sudo[216999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdzgvdidfpugtkidauqrtynbsmjgrblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432338.7415195-1098-70563989317446/AnsiballZ_file.py'
Oct 02 19:12:19 compute-0 sudo[216999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:19 compute-0 python3.9[217001]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:19 compute-0 sudo[216999]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:19 compute-0 sudo[217151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attjhcjaerkixembwyobgijqmuxxvkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432339.6253731-1107-47523732195473/AnsiballZ_file.py'
Oct 02 19:12:19 compute-0 sudo[217151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:20 compute-0 python3.9[217153]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:20 compute-0 sudo[217151]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:20 compute-0 sudo[217303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbwgokidvpsznmnlsrkdcuwoxwhmwlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432340.3690886-1115-202175942521702/AnsiballZ_stat.py'
Oct 02 19:12:20 compute-0 sudo[217303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:21 compute-0 python3.9[217305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:21 compute-0 sudo[217303]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:21 compute-0 sudo[217426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhcfofxdjgodczizdwcaayvfoajxejan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432340.3690886-1115-202175942521702/AnsiballZ_copy.py'
Oct 02 19:12:21 compute-0 sudo[217426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:21 compute-0 python3.9[217428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432340.3690886-1115-202175942521702/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:21 compute-0 sudo[217426]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:22 compute-0 sudo[217578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqxycdjkgdrnglekleyrnrirenbwnoht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432342.0051167-1131-38833924093504/AnsiballZ_file.py'
Oct 02 19:12:22 compute-0 sudo[217578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:22 compute-0 podman[217580]: 2025-10-02 19:12:22.408558547 +0000 UTC m=+0.060941302 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 19:12:22 compute-0 python3.9[217581]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:22 compute-0 sudo[217578]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:23 compute-0 sudo[217763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgymljfhemssqwuluemkdxppgevzaom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432342.7840161-1139-189448566865837/AnsiballZ_stat.py'
Oct 02 19:12:23 compute-0 sudo[217763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:23 compute-0 podman[217723]: 2025-10-02 19:12:23.156606672 +0000 UTC m=+0.076092166 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:12:23 compute-0 python3.9[217774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:23 compute-0 sudo[217763]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:23 compute-0 sudo[217850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvnfgipgolbnaivzpkftxuqxagkrcmzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432342.7840161-1139-189448566865837/AnsiballZ_file.py'
Oct 02 19:12:23 compute-0 sudo[217850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:23 compute-0 python3.9[217852]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:23 compute-0 sudo[217850]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:24 compute-0 sudo[218002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yutslbecalgtycygkynwzrnttsepahhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432344.0736144-1151-48413452744874/AnsiballZ_stat.py'
Oct 02 19:12:24 compute-0 sudo[218002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:24 compute-0 python3.9[218004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:24 compute-0 sudo[218002]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:24 compute-0 sudo[218080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyebozyrqhxpjcjdatvihoidttuxfuuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432344.0736144-1151-48413452744874/AnsiballZ_file.py'
Oct 02 19:12:24 compute-0 sudo[218080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:25 compute-0 python3.9[218082]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8wwu8yzw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:25 compute-0 sudo[218080]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:25 compute-0 sudo[218232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpugauxdyvpaaykfojsxykrdwxtcjysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432345.3409963-1163-148721012433125/AnsiballZ_stat.py'
Oct 02 19:12:25 compute-0 sudo[218232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:25 compute-0 python3.9[218234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:25 compute-0 sudo[218232]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:26 compute-0 sudo[218310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anojslrgckewjqxhmvzcgowtfpcwvgaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432345.3409963-1163-148721012433125/AnsiballZ_file.py'
Oct 02 19:12:26 compute-0 sudo[218310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:26 compute-0 python3.9[218312]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:26 compute-0 sudo[218310]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:26 compute-0 podman[218337]: 2025-10-02 19:12:26.684472932 +0000 UTC m=+0.055744913 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:12:26 compute-0 podman[218338]: 2025-10-02 19:12:26.744858866 +0000 UTC m=+0.117160965 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:12:27 compute-0 sudo[218504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsvzhwieklzgaociqebuyrakqcporjkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432346.698028-1176-48491528949844/AnsiballZ_command.py'
Oct 02 19:12:27 compute-0 sudo[218504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:27 compute-0 python3.9[218506]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:27 compute-0 sudo[218504]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:28 compute-0 sudo[218657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbuetycpcmxboepmowyvjwdfqqcrrpff ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432347.4884398-1184-268748713397756/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:12:28 compute-0 sudo[218657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:28 compute-0 python3[218659]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:12:28 compute-0 sudo[218657]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:28 compute-0 sudo[218809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xufuzcdhovoaqhmdquyqsrmoyuncizst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432348.4521096-1192-163146033374066/AnsiballZ_stat.py'
Oct 02 19:12:28 compute-0 sudo[218809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:29 compute-0 python3.9[218811]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:29 compute-0 sudo[218809]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:29 compute-0 sudo[218887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnxcwqwpagdltfkobbbpfqbfodguftx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432348.4521096-1192-163146033374066/AnsiballZ_file.py'
Oct 02 19:12:29 compute-0 sudo[218887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:29 compute-0 python3.9[218889]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:29 compute-0 sudo[218887]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:30 compute-0 sudo[219039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otjgvxqqfbbumfcuiwsafbzyyhfyassn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432349.778255-1204-104530480447821/AnsiballZ_stat.py'
Oct 02 19:12:30 compute-0 sudo[219039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:30 compute-0 python3.9[219041]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:30 compute-0 sudo[219039]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:30 compute-0 sudo[219117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnjwjpsvmwyljecxuxndrylywfbibhhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432349.778255-1204-104530480447821/AnsiballZ_file.py'
Oct 02 19:12:30 compute-0 sudo[219117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:30 compute-0 python3.9[219119]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:30 compute-0 sudo[219117]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:31 compute-0 sudo[219269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihyuwvxoghdpattylouneyojgvkxkgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432351.1462188-1216-233745977034704/AnsiballZ_stat.py'
Oct 02 19:12:31 compute-0 sudo[219269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:31 compute-0 python3.9[219271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:31 compute-0 sudo[219269]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:32 compute-0 sudo[219347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgrbqcksmdkznifwhjlhrtwrahuytuzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432351.1462188-1216-233745977034704/AnsiballZ_file.py'
Oct 02 19:12:32 compute-0 sudo[219347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:32 compute-0 python3.9[219349]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:32 compute-0 sudo[219347]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:32 compute-0 sudo[219499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqyoritnexnzzwktaswigrxfusbldxzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432352.4296-1228-39999673788874/AnsiballZ_stat.py'
Oct 02 19:12:32 compute-0 sudo[219499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:32 compute-0 python3.9[219501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:32 compute-0 sudo[219499]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:33 compute-0 sudo[219577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldypkdopahshqukedwhlapqytzyocqvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432352.4296-1228-39999673788874/AnsiballZ_file.py'
Oct 02 19:12:33 compute-0 sudo[219577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:33 compute-0 python3.9[219579]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:33 compute-0 sudo[219577]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:33 compute-0 sudo[219729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymmjdncqxvwezyocyzlrpxfpqnbwjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432353.5982454-1240-150883982533098/AnsiballZ_stat.py'
Oct 02 19:12:33 compute-0 sudo[219729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:34 compute-0 python3.9[219731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:12:34 compute-0 sudo[219729]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:34 compute-0 sudo[219854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oktvigncncpinuwjypwnxorljrjuxfvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432353.5982454-1240-150883982533098/AnsiballZ_copy.py'
Oct 02 19:12:34 compute-0 sudo[219854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:34 compute-0 python3.9[219856]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432353.5982454-1240-150883982533098/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:34 compute-0 sudo[219854]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:34 compute-0 podman[219857]: 2025-10-02 19:12:34.855936772 +0000 UTC m=+0.057411287 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:12:35 compute-0 sudo[220029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fguzbkmagqektzebirympsrykwdhvhuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432354.9861221-1255-18154835083592/AnsiballZ_file.py'
Oct 02 19:12:35 compute-0 sudo[220029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:35 compute-0 python3.9[220031]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:35 compute-0 sudo[220029]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:36 compute-0 sudo[220181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khhfzxizlwzaboiaiwcldnylvfszwdlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432355.7831867-1263-226753002303820/AnsiballZ_command.py'
Oct 02 19:12:36 compute-0 sudo[220181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:36 compute-0 python3.9[220183]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:36 compute-0 sudo[220181]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:37 compute-0 sudo[220336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhhbeifxrsnrukijwrobamqwjunkmkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432356.5928822-1271-259318578216462/AnsiballZ_blockinfile.py'
Oct 02 19:12:37 compute-0 sudo[220336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:37 compute-0 python3.9[220338]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:37 compute-0 sudo[220336]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:37 compute-0 sudo[220488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlrorwzmiijhjfvvurpdoddqyjjofvxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432357.6189973-1280-225237362459469/AnsiballZ_command.py'
Oct 02 19:12:37 compute-0 sudo[220488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:38 compute-0 python3.9[220490]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:38 compute-0 sudo[220488]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:38 compute-0 sudo[220641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmwcpnppkkvrzxptfkydcjqpyjmpcgxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432358.4232385-1288-120493603477248/AnsiballZ_stat.py'
Oct 02 19:12:38 compute-0 sudo[220641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:38 compute-0 python3.9[220643]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:12:39 compute-0 sudo[220641]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:39 compute-0 sudo[220795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazalrebkgqpfgizdxxjuslmqnmcrwxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432359.2066321-1296-85191946846090/AnsiballZ_command.py'
Oct 02 19:12:39 compute-0 sudo[220795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:39 compute-0 python3.9[220797]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:39 compute-0 sudo[220795]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:40 compute-0 sudo[220950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynxfiljupsjpwfpvdsasowknxblsnjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432360.0537398-1304-215156650007338/AnsiballZ_file.py'
Oct 02 19:12:40 compute-0 sudo[220950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:40 compute-0 python3.9[220952]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:40 compute-0 sudo[220950]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:41 compute-0 sshd-session[195085]: Connection closed by 192.168.122.30 port 57986
Oct 02 19:12:41 compute-0 sshd-session[195082]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:12:41 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Oct 02 19:12:41 compute-0 systemd[1]: session-27.scope: Consumed 1min 50.269s CPU time.
Oct 02 19:12:41 compute-0 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Oct 02 19:12:41 compute-0 systemd-logind[798]: Removed session 27.
Oct 02 19:12:43 compute-0 podman[220977]: 2025-10-02 19:12:43.719698628 +0000 UTC m=+0.090059401 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, 
org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct 02 19:12:46 compute-0 sshd-session[220997]: Accepted publickey for zuul from 192.168.122.30 port 46598 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:12:46 compute-0 systemd-logind[798]: New session 28 of user zuul.
Oct 02 19:12:46 compute-0 systemd[1]: Started Session 28 of User zuul.
Oct 02 19:12:46 compute-0 sshd-session[220997]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:12:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:12:47.446 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:12:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:12:47.448 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:12:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:12:47.448 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:12:47 compute-0 sudo[221150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyunszoyjaqrcpkebhajblrxnzzbenry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432366.7618272-24-152689430234871/AnsiballZ_systemd_service.py'
Oct 02 19:12:47 compute-0 sudo[221150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:47 compute-0 python3.9[221152]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:12:47 compute-0 systemd[1]: Reloading.
Oct 02 19:12:48 compute-0 systemd-sysv-generator[221186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:12:48 compute-0 systemd-rc-local-generator[221182]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:12:48 compute-0 sudo[221150]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:48 compute-0 podman[221190]: 2025-10-02 19:12:48.449165248 +0000 UTC m=+0.101710373 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:12:48 compute-0 podman[221288]: 2025-10-02 19:12:48.706067862 +0000 UTC m=+0.077957277 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:12:49 compute-0 python3.9[221382]: ansible-ansible.builtin.service_facts Invoked
Oct 02 19:12:49 compute-0 network[221399]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 19:12:49 compute-0 network[221400]: 'network-scripts' will be removed from distribution in near future.
Oct 02 19:12:49 compute-0 network[221401]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 19:12:52 compute-0 podman[221516]: 2025-10-02 19:12:52.588904649 +0000 UTC m=+0.116439047 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:12:53 compute-0 sudo[221706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyniqsjxfbddreccbgrwmkadjnzorfor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432373.0496945-47-108114458190366/AnsiballZ_systemd_service.py'
Oct 02 19:12:53 compute-0 podman[221670]: 2025-10-02 19:12:53.462845654 +0000 UTC m=+0.065198726 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:12:53 compute-0 sudo[221706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:53 compute-0 python3.9[221722]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:12:53 compute-0 sudo[221706]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:54 compute-0 sudo[221873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coewmrxbldsvjqsrsjgmeyrfofjxadfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432374.169008-57-182905021766911/AnsiballZ_file.py'
Oct 02 19:12:54 compute-0 sudo[221873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:54 compute-0 python3.9[221875]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:54 compute-0 sudo[221873]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:55 compute-0 sudo[222025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glcmkhqulqysipfasluxhxnhqnengnyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432375.1837-65-152410654848296/AnsiballZ_file.py'
Oct 02 19:12:55 compute-0 sudo[222025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:55 compute-0 python3.9[222027]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:12:55 compute-0 sudo[222025]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:56 compute-0 sudo[222177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffoxnsoobihjhiuhmfoxcebfxxbwwkec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432376.099061-74-210613112666045/AnsiballZ_command.py'
Oct 02 19:12:56 compute-0 sudo[222177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:56 compute-0 python3.9[222179]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:12:56 compute-0 sudo[222177]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:56 compute-0 podman[222182]: 2025-10-02 19:12:56.988458042 +0000 UTC m=+0.103699996 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 19:12:57 compute-0 podman[222183]: 2025-10-02 19:12:57.009645899 +0000 UTC m=+0.126007673 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 19:12:57 compute-0 python3.9[222373]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:12:58 compute-0 sudo[222523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbkmevazbhekpxxqspizigihyicgzsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432378.265225-92-191672768755200/AnsiballZ_systemd_service.py'
Oct 02 19:12:58 compute-0 sudo[222523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:12:58 compute-0 python3.9[222525]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:12:59 compute-0 systemd[1]: Reloading.
Oct 02 19:12:59 compute-0 systemd-sysv-generator[222556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:12:59 compute-0 systemd-rc-local-generator[222551]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:12:59 compute-0 sudo[222523]: pam_unix(sudo:session): session closed for user root
Oct 02 19:12:59 compute-0 podman[209015]: time="2025-10-02T19:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:12:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24999 "" "Go-http-client/1.1"
Oct 02 19:12:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3818 "" "Go-http-client/1.1"
Oct 02 19:12:59 compute-0 sudo[222711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eweplezsfmlelvlngmyfwqxncpwtixgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432379.5520015-100-262098883606133/AnsiballZ_command.py'
Oct 02 19:12:59 compute-0 sudo[222711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:00 compute-0 python3.9[222713]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:13:00 compute-0 sudo[222711]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:00 compute-0 sudo[222864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddcqkweojevhudcxshrbvjateyopehuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432380.4455686-109-8694314346486/AnsiballZ_file.py'
Oct 02 19:13:00 compute-0 sudo[222864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:01 compute-0 python3.9[222866]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:01 compute-0 sudo[222864]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: ERROR   19:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: ERROR   19:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: ERROR   19:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:13:01 compute-0 openstack_network_exporter[211160]: ERROR   19:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:01 compute-0 python3.9[223021]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:13:02 compute-0 python3.9[223173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:03 compute-0 python3.9[223294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432382.1857152-125-85002486396245/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:04 compute-0 nova_compute[194781]: 2025-10-02 19:13:04.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:04 compute-0 sudo[223444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrbxmrbecxkmugtthpzbuddtdmrqgpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432383.9357944-143-133256522901573/AnsiballZ_getent.py'
Oct 02 19:13:04 compute-0 sudo[223444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:04 compute-0 python3.9[223446]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 02 19:13:04 compute-0 sudo[223444]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.028 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.052 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.053 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.054 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.090 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.090 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.091 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.091 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.250 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.251 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5814MB free_disk=72.56378936767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.252 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.252 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.337 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.337 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.366 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.385 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.387 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:13:05 compute-0 nova_compute[194781]: 2025-10-02 19:13:05.387 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:13:05 compute-0 podman[223512]: 2025-10-02 19:13:05.706134599 +0000 UTC m=+0.081813140 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:13:06 compute-0 python3.9[223621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:06 compute-0 nova_compute[194781]: 2025-10-02 19:13:06.367 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:06 compute-0 nova_compute[194781]: 2025-10-02 19:13:06.368 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:06 compute-0 nova_compute[194781]: 2025-10-02 19:13:06.368 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:13:06 compute-0 python3.9[223742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432385.5570045-171-12723420651054/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:07 compute-0 python3.9[223892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:08 compute-0 nova_compute[194781]: 2025-10-02 19:13:08.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:08 compute-0 nova_compute[194781]: 2025-10-02 19:13:08.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:13:08 compute-0 python3.9[224013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432386.8570683-171-131687256967950/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:08 compute-0 python3.9[224163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:09 compute-0 python3.9[224284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432388.2285166-171-248502101718354/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:10 compute-0 python3.9[224434]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:13:10 compute-0 python3.9[224586]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:13:11 compute-0 python3.9[224738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:12 compute-0 python3.9[224859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432391.1476667-230-215720752771685/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.936 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.937 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.937 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.941 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.942 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.943 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.944 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:13:12.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:13:13 compute-0 python3.9[225009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:13 compute-0 python3.9[225086]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:14 compute-0 podman[225210]: 2025-10-02 19:13:14.140450172 +0000 UTC m=+0.087814670 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct 02 19:13:14 compute-0 python3.9[225246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:14 compute-0 python3.9[225378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432393.6997974-230-79597799198979/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:15 compute-0 python3.9[225528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:16 compute-0 python3.9[225649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432395.037772-230-162474050930827/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:16 compute-0 python3.9[225799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:17 compute-0 python3.9[225920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432396.432927-230-243741776009112/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:18 compute-0 python3.9[226070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:18 compute-0 podman[226165]: 2025-10-02 19:13:18.602653485 +0000 UTC m=+0.063087391 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:13:18 compute-0 python3.9[226209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432397.6881132-230-180557614838737/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:18 compute-0 podman[226212]: 2025-10-02 19:13:18.88264754 +0000 UTC m=+0.065605227 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Oct 02 19:13:19 compute-0 python3.9[226381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:20 compute-0 python3.9[226458]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:20 compute-0 sudo[226608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhxhpaavkgtwhuutglvyouhadutyybz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432400.2896717-325-235012295925780/AnsiballZ_file.py'
Oct 02 19:13:20 compute-0 sudo[226608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:20 compute-0 python3.9[226610]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:20 compute-0 sudo[226608]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:21 compute-0 sudo[226760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lregqebfzsestzxtsrenluhmxmijseuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432401.0972292-333-84960570487907/AnsiballZ_file.py'
Oct 02 19:13:21 compute-0 sudo[226760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:21 compute-0 python3.9[226762]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:21 compute-0 sudo[226760]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:22 compute-0 sudo[226912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmzbravjcpgtdkbpwzaghzomiwqzzhai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432401.8764462-341-251693875816267/AnsiballZ_file.py'
Oct 02 19:13:22 compute-0 sudo[226912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:22 compute-0 python3.9[226914]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:22 compute-0 sudo[226912]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:22 compute-0 sudo[227077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxsfpfpymmunlevbnuniyebdfqnxeoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432402.6609192-349-26850594141723/AnsiballZ_stat.py'
Oct 02 19:13:22 compute-0 sudo[227077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:22 compute-0 podman[227038]: 2025-10-02 19:13:22.990993635 +0000 UTC m=+0.082445709 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:13:23 compute-0 python3.9[227085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:23 compute-0 sudo[227077]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:23 compute-0 sudo[227222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiwkfuqqscfcqmpuvcxbjlszanbbuutf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432402.6609192-349-26850594141723/AnsiballZ_copy.py'
Oct 02 19:13:23 compute-0 sudo[227222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:23 compute-0 podman[227181]: 2025-10-02 19:13:23.573069886 +0000 UTC m=+0.056622640 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:13:23 compute-0 python3.9[227233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432402.6609192-349-26850594141723/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:23 compute-0 sudo[227222]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:23 compute-0 sudo[227307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djtddcnjrlwhroyaidqlujkuuqzpowyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432402.6609192-349-26850594141723/AnsiballZ_stat.py'
Oct 02 19:13:23 compute-0 sudo[227307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:24 compute-0 python3.9[227309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:24 compute-0 sudo[227307]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:24 compute-0 sudo[227430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeznyrhjmqqzigcwihrwmtnjpqdxdtwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432402.6609192-349-26850594141723/AnsiballZ_copy.py'
Oct 02 19:13:24 compute-0 sudo[227430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:24 compute-0 python3.9[227432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432402.6609192-349-26850594141723/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:24 compute-0 sudo[227430]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:25 compute-0 sudo[227582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdythywenerpewnpcrrwejikhuwvrjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432405.0599163-349-103125563592289/AnsiballZ_stat.py'
Oct 02 19:13:25 compute-0 sudo[227582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:25 compute-0 python3.9[227584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:13:25 compute-0 sudo[227582]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:26 compute-0 sudo[227705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzamznidgmkieqaxpikfilfpdswaatlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432405.0599163-349-103125563592289/AnsiballZ_copy.py'
Oct 02 19:13:26 compute-0 sudo[227705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:26 compute-0 python3.9[227707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759432405.0599163-349-103125563592289/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 19:13:26 compute-0 sudo[227705]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:27 compute-0 sudo[227871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvfmvgukwfysoecvuthdbkyadogijxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432406.6708245-391-197129051836001/AnsiballZ_container_config_data.py'
Oct 02 19:13:27 compute-0 sudo[227871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:27 compute-0 podman[227831]: 2025-10-02 19:13:27.276028769 +0000 UTC m=+0.075162309 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 19:13:27 compute-0 podman[227832]: 2025-10-02 19:13:27.315397184 +0000 UTC m=+0.112804378 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:13:27 compute-0 python3.9[227879]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Oct 02 19:13:27 compute-0 sudo[227871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:28 compute-0 sudo[228051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzkbytajpmbnmeqeaywsoznvkrsnuob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432407.7110667-400-74779439127305/AnsiballZ_container_config_hash.py'
Oct 02 19:13:28 compute-0 sudo[228051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:28 compute-0 python3.9[228053]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:13:28 compute-0 sudo[228051]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:29 compute-0 sudo[228203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jipygtksedzjexqbdqytjfoooptdqsef ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432408.872935-410-209561361782615/AnsiballZ_edpm_container_manage.py'
Oct 02 19:13:29 compute-0 sudo[228203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:29 compute-0 podman[209015]: time="2025-10-02T19:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:13:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24999 "" "Go-http-client/1.1"
Oct 02 19:13:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3834 "" "Go-http-client/1.1"
Oct 02 19:13:29 compute-0 python3[228205]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:13:30 compute-0 podman[228242]: 2025-10-02 19:13:30.158829007 +0000 UTC m=+0.075247800 container create 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:13:30 compute-0 podman[228242]: 2025-10-02 19:13:30.119539014 +0000 UTC m=+0.035957877 image pull 4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct 02 19:13:30 compute-0 python3[228205]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Oct 02 19:13:30 compute-0 sudo[228203]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:30 compute-0 sudo[228431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunovknvmpbewqmcnyhnmbkzahxdgrju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432410.53342-418-208662104474099/AnsiballZ_stat.py'
Oct 02 19:13:30 compute-0 sudo[228431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:31 compute-0 python3.9[228433]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:13:31 compute-0 sudo[228431]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: ERROR   19:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: ERROR   19:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: ERROR   19:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: ERROR   19:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:13:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:13:31 compute-0 sudo[228585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nirihysjpsgaupckaniocssygvplnctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432411.344145-427-277362826040554/AnsiballZ_file.py'
Oct 02 19:13:31 compute-0 sudo[228585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:31 compute-0 python3.9[228587]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:31 compute-0 sudo[228585]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:32 compute-0 sudo[228736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydsqnatdwxxsbcwgxohcwqdvehtplmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432411.9547343-427-176961807497901/AnsiballZ_copy.py'
Oct 02 19:13:32 compute-0 sudo[228736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:32 compute-0 python3.9[228738]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432411.9547343-427-176961807497901/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:32 compute-0 sudo[228736]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:33 compute-0 sudo[228812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kasxjzzuknuiwzzuxfcjuceyrcihzusg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432411.9547343-427-176961807497901/AnsiballZ_systemd.py'
Oct 02 19:13:33 compute-0 sudo[228812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:33 compute-0 python3.9[228814]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:13:33 compute-0 systemd[1]: Reloading.
Oct 02 19:13:33 compute-0 systemd-rc-local-generator[228840]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:13:33 compute-0 systemd-sysv-generator[228843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:13:33 compute-0 sudo[228812]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:34 compute-0 sudo[228923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfncydsdrelovjwhajoukmrcwxclgyhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432411.9547343-427-176961807497901/AnsiballZ_systemd.py'
Oct 02 19:13:34 compute-0 sudo[228923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:34 compute-0 python3.9[228925]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:13:34 compute-0 systemd[1]: Reloading.
Oct 02 19:13:34 compute-0 systemd-rc-local-generator[228953]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:13:34 compute-0 systemd-sysv-generator[228956]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:13:34 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct 02 19:13:34 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.
Oct 02 19:13:35 compute-0 podman[228966]: 2025-10-02 19:13:35.110994357 +0000 UTC m=+0.323216353 container init 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251001)
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + sudo -E kolla_set_configs
Oct 02 19:13:35 compute-0 sudo[228987]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:13:35 compute-0 sudo[228987]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:13:35 compute-0 sudo[228987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:13:35 compute-0 podman[228966]: 2025-10-02 19:13:35.145241388 +0000 UTC m=+0.357463394 container start 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:13:35 compute-0 podman[228966]: ceilometer_agent_ipmi
Oct 02 19:13:35 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Validating config file
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Copying service configuration files
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: INFO:__main__:Writing out command to execute
Oct 02 19:13:35 compute-0 sudo[228987]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: ++ cat /run_command
Oct 02 19:13:35 compute-0 podman[228988]: 2025-10-02 19:13:35.226522406 +0000 UTC m=+0.072470677 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + ARGS=
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + sudo kolla_copy_cacerts
Oct 02 19:13:35 compute-0 sudo[228923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:35 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-246d413885f44148.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:13:35 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-246d413885f44148.service: Failed with result 'exit-code'.
Oct 02 19:13:35 compute-0 sudo[229010]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:13:35 compute-0 sudo[229010]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:13:35 compute-0 sudo[229010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:13:35 compute-0 sudo[229010]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + [[ ! -n '' ]]
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + . kolla_extend_start
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + umask 0022
Oct 02 19:13:35 compute-0 ceilometer_agent_ipmi[228981]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Oct 02 19:13:35 compute-0 podman[229137]: 2025-10-02 19:13:35.838400441 +0000 UTC m=+0.046715530 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:13:35 compute-0 sudo[229180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbcoekptfodffzbtaixcvhyzjqzijgvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432415.5575163-453-148623268973089/AnsiballZ_container_config_data.py'
Oct 02 19:13:35 compute-0 sudo[229180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:36 compute-0 python3.9[229189]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Oct 02 19:13:36 compute-0 sudo[229180]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.111 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.112 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.113 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.114 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.115 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.116 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.117 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.120 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.121 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.122 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.123 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.124 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.126 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.145 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.147 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.149 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.260 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmps5i6rec_/privsep.sock']
Oct 02 19:13:36 compute-0 sudo[229226]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmps5i6rec_/privsep.sock
Oct 02 19:13:36 compute-0 sudo[229226]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:13:36 compute-0 sudo[229226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:13:36 compute-0 sudo[229347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalimpbybqllpokhrghxxzigxnlwsltt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432416.2947862-462-126201068981898/AnsiballZ_container_config_hash.py'
Oct 02 19:13:36 compute-0 sudo[229347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:36 compute-0 sudo[229226]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.876 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.877 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps5i6rec_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.758 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.762 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.765 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.765 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Oct 02 19:13:36 compute-0 python3.9[229349]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 19:13:36 compute-0 sudo[229347]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.976 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.977 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.979 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.979 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.979 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.980 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.980 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.980 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.980 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.981 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.981 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.981 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.981 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.987 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.987 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.987 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.988 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.988 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.988 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.988 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.989 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.989 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.989 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.989 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.989 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.990 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.990 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.990 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.990 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.991 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.991 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.991 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.991 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.992 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.992 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.992 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.992 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.993 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.993 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.993 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.993 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.993 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.993 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.994 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.995 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.996 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.997 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.998 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.998 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.998 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.998 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.998 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.998 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:36 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:36.999 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.000 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.000 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.000 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.000 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.000 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.000 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.001 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.001 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.001 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.001 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.001 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.001 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.002 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.003 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.003 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.003 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.003 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.003 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.003 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.004 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.005 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.005 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.005 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.005 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.006 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.006 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.006 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.006 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.006 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.007 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.007 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.007 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.007 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.007 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.007 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.008 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.008 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.008 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.008 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.009 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.009 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.009 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.009 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.009 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.009 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.010 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.011 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.012 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.013 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.014 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.015 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.016 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.017 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.018 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.019 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.019 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.019 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 02 19:13:37 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:37.021 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Oct 02 19:13:37 compute-0 sudo[229504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfqenztzkjoxbqqepgvyujonthzhexsb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432417.2729247-472-94163923966844/AnsiballZ_edpm_container_manage.py'
Oct 02 19:13:37 compute-0 sudo[229504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:37 compute-0 python3[229506]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 19:13:38 compute-0 podman[229544]: 2025-10-02 19:13:38.132834634 +0000 UTC m=+0.077113160 container create c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release-0.7.12=, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:13:38 compute-0 podman[229544]: 2025-10-02 19:13:38.091886316 +0000 UTC m=+0.036164932 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct 02 19:13:38 compute-0 python3[229506]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Oct 02 19:13:38 compute-0 sudo[229504]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:38 compute-0 sudo[229732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pllbwpnezjekarcrwtrimemlofhtntnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432418.52755-480-6380464220804/AnsiballZ_stat.py'
Oct 02 19:13:38 compute-0 sudo[229732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:39 compute-0 python3.9[229734]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:13:39 compute-0 sudo[229732]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:39 compute-0 sudo[229886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yriozjcjcojrujtpdoxrwtunarwomtiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432419.4421856-489-177328528459554/AnsiballZ_file.py'
Oct 02 19:13:39 compute-0 sudo[229886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:40 compute-0 python3.9[229888]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:40 compute-0 sudo[229886]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:40 compute-0 PackageKit[132879]: daemon quit
Oct 02 19:13:40 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 19:13:40 compute-0 sudo[230037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqsayqjsnhlparrwrhfkkgbmzbdhowi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432420.1776292-489-180702005710091/AnsiballZ_copy.py'
Oct 02 19:13:40 compute-0 sudo[230037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:41 compute-0 python3.9[230039]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759432420.1776292-489-180702005710091/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:41 compute-0 sudo[230037]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:41 compute-0 sudo[230113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tytlvrhjbbpvwnrqjlxvtqoubyismkpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432420.1776292-489-180702005710091/AnsiballZ_systemd.py'
Oct 02 19:13:41 compute-0 sudo[230113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:41 compute-0 python3.9[230115]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 19:13:41 compute-0 systemd[1]: Reloading.
Oct 02 19:13:41 compute-0 systemd-sysv-generator[230146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:13:41 compute-0 systemd-rc-local-generator[230143]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:13:42 compute-0 sudo[230113]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:42 compute-0 sudo[230224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuhurwszloiqxddrxelzinwvsanqbxhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432420.1776292-489-180702005710091/AnsiballZ_systemd.py'
Oct 02 19:13:42 compute-0 sudo[230224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:42 compute-0 python3.9[230226]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 19:13:42 compute-0 systemd[1]: Reloading.
Oct 02 19:13:42 compute-0 systemd-rc-local-generator[230250]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 19:13:42 compute-0 systemd-sysv-generator[230254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 19:13:43 compute-0 systemd[1]: Starting kepler container...
Oct 02 19:13:43 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.
Oct 02 19:13:43 compute-0 podman[230266]: 2025-10-02 19:13:43.480577419 +0000 UTC m=+0.324831526 container init c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Oct 02 19:13:43 compute-0 kepler[230281]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:13:43 compute-0 podman[230266]: 2025-10-02 19:13:43.504612211 +0000 UTC m=+0.348866228 container start c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.510295       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.510496       1 config.go:293] using gCgroup ID in the BPF program: true
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.510524       1 config.go:295] kernel version: 5.14
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.511264       1 power.go:78] Unable to obtain power, use estimate method
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.511294       1 redfish.go:169] failed to get redfish credential file path
Oct 02 19:13:43 compute-0 podman[230266]: kepler
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.511699       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.511713       1 power.go:79] using none to obtain power
Oct 02 19:13:43 compute-0 kepler[230281]: E1002 19:13:43.511728       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct 02 19:13:43 compute-0 kepler[230281]: E1002 19:13:43.511750       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct 02 19:13:43 compute-0 kepler[230281]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:13:43 compute-0 kepler[230281]: I1002 19:13:43.513983       1 exporter.go:84] Number of CPUs: 8
Oct 02 19:13:43 compute-0 systemd[1]: Started kepler container.
Oct 02 19:13:43 compute-0 sudo[230224]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:43 compute-0 podman[230291]: 2025-10-02 19:13:43.57412809 +0000 UTC m=+0.059746653 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler)
Oct 02 19:13:43 compute-0 systemd[1]: c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3-2bb56e30b539124c.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:13:43 compute-0 systemd[1]: c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3-2bb56e30b539124c.service: Failed with result 'exit-code'.
Oct 02 19:13:44 compute-0 sudo[230465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daizebkscckyriipfnsfonwzrgegxryt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432423.7227535-513-252687799772657/AnsiballZ_systemd.py'
Oct 02 19:13:44 compute-0 sudo[230465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.110579       1 watcher.go:83] Using in cluster k8s config
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.110939       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct 02 19:13:44 compute-0 kepler[230281]: E1002 19:13:44.111016       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.115606       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.115674       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.119971       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.120009       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.130135       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.130190       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.130367       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138622       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138657       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138662       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138668       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138674       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138686       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138756       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138782       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138803       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138820       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.138879       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct 02 19:13:44 compute-0 kepler[230281]: I1002 19:13:44.139246       1 exporter.go:208] Started Kepler in 629.18842ms
Oct 02 19:13:44 compute-0 python3.9[230467]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:13:44 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Oct 02 19:13:44 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:44.485 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 02 19:13:44 compute-0 podman[230479]: 2025-10-02 19:13:44.511950628 +0000 UTC m=+0.116431493 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:13:44 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:44.588 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Oct 02 19:13:44 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:44.588 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Oct 02 19:13:44 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:44.589 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Oct 02 19:13:44 compute-0 ceilometer_agent_ipmi[228981]: 2025-10-02 19:13:44.605 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Oct 02 19:13:44 compute-0 systemd[1]: libpod-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.scope: Deactivated successfully.
Oct 02 19:13:44 compute-0 systemd[1]: libpod-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.scope: Consumed 2.203s CPU time.
Oct 02 19:13:44 compute-0 podman[230487]: 2025-10-02 19:13:44.79673922 +0000 UTC m=+0.362844016 container died 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:13:44 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-246d413885f44148.timer: Deactivated successfully.
Oct 02 19:13:44 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.
Oct 02 19:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-userdata-shm.mount: Deactivated successfully.
Oct 02 19:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53-merged.mount: Deactivated successfully.
Oct 02 19:13:44 compute-0 podman[230487]: 2025-10-02 19:13:44.874953297 +0000 UTC m=+0.441058043 container cleanup 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:13:44 compute-0 podman[230487]: ceilometer_agent_ipmi
Oct 02 19:13:44 compute-0 podman[230529]: ceilometer_agent_ipmi
Oct 02 19:13:44 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Oct 02 19:13:44 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Oct 02 19:13:44 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct 02 19:13:45 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750933489b254be3f5ca070b6a4cde620d3e95cc75d93cc8c28a72d456da7e53/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 02 19:13:45 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.
Oct 02 19:13:45 compute-0 podman[230541]: 2025-10-02 19:13:45.154973433 +0000 UTC m=+0.184830553 container init 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + sudo -E kolla_set_configs
Oct 02 19:13:45 compute-0 podman[230541]: 2025-10-02 19:13:45.190069396 +0000 UTC m=+0.219926536 container start 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:13:45 compute-0 podman[230541]: ceilometer_agent_ipmi
Oct 02 19:13:45 compute-0 sudo[230562]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 19:13:45 compute-0 sudo[230562]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:13:45 compute-0 sudo[230562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:13:45 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 19:13:45 compute-0 sudo[230465]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Validating config file
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Copying service configuration files
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: INFO:__main__:Writing out command to execute
Oct 02 19:13:45 compute-0 sudo[230562]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: ++ cat /run_command
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + ARGS=
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + sudo kolla_copy_cacerts
Oct 02 19:13:45 compute-0 sudo[230583]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 19:13:45 compute-0 sudo[230583]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:13:45 compute-0 sudo[230583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:13:45 compute-0 sudo[230583]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + [[ ! -n '' ]]
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + . kolla_extend_start
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + umask 0022
Oct 02 19:13:45 compute-0 ceilometer_agent_ipmi[230556]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Oct 02 19:13:45 compute-0 podman[230563]: 2025-10-02 19:13:45.31834504 +0000 UTC m=+0.117757109 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:13:45 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-70c6c021b2456d10.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:13:45 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-70c6c021b2456d10.service: Failed with result 'exit-code'.
Oct 02 19:13:45 compute-0 sudo[230738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjchgbblpccshxymgnrwgkwbmxljakki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432425.4850104-521-38682189614756/AnsiballZ_systemd.py'
Oct 02 19:13:46 compute-0 sudo[230738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.200 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.201 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.202 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.203 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.204 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.205 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.206 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.208 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.209 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.210 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.211 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.212 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.213 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.214 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.239 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.241 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.243 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 02 19:13:46 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.272 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp61g3oh8q/privsep.sock']
Oct 02 19:13:46 compute-0 sudo[230745]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp61g3oh8q/privsep.sock
Oct 02 19:13:46 compute-0 sudo[230745]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 19:13:46 compute-0 sudo[230745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 02 19:13:46 compute-0 python3.9[230740]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:13:46 compute-0 systemd[1]: Stopping kepler container...
Oct 02 19:13:46 compute-0 kepler[230281]: I1002 19:13:46.518479       1 exporter.go:218] Received shutdown signal
Oct 02 19:13:46 compute-0 kepler[230281]: I1002 19:13:46.518694       1 exporter.go:226] Exiting...
Oct 02 19:13:46 compute-0 systemd[1]: libpod-c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.scope: Deactivated successfully.
Oct 02 19:13:46 compute-0 podman[230751]: 2025-10-02 19:13:46.71856149 +0000 UTC m=+0.266841200 container died c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Oct 02 19:13:46 compute-0 systemd[1]: c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3-2bb56e30b539124c.timer: Deactivated successfully.
Oct 02 19:13:46 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.
Oct 02 19:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3-userdata-shm.mount: Deactivated successfully.
Oct 02 19:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ed0487f2a3ea2cd5b9e53ef8cf6bd2ad5dc6c96074a34c047a517f24d07042d-merged.mount: Deactivated successfully.
Oct 02 19:13:46 compute-0 podman[230751]: 2025-10-02 19:13:46.757963317 +0000 UTC m=+0.306243027 container cleanup c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Oct 02 19:13:46 compute-0 podman[230751]: kepler
Oct 02 19:13:46 compute-0 podman[230774]: kepler
Oct 02 19:13:46 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Oct 02 19:13:46 compute-0 systemd[1]: Stopped kepler container.
Oct 02 19:13:46 compute-0 systemd[1]: Starting kepler container...
Oct 02 19:13:46 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:13:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.
Oct 02 19:13:46 compute-0 podman[230788]: 2025-10-02 19:13:46.98582212 +0000 UTC m=+0.123028527 container init c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vcs-type=git, name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible)
Oct 02 19:13:47 compute-0 kepler[230805]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016045       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016157       1 config.go:293] using gCgroup ID in the BPF program: true
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016182       1 config.go:295] kernel version: 5.14
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016632       1 power.go:78] Unable to obtain power, use estimate method
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016652       1 redfish.go:169] failed to get redfish credential file path
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016985       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.016995       1 power.go:79] using none to obtain power
Oct 02 19:13:47 compute-0 kepler[230805]: E1002 19:13:47.017007       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct 02 19:13:47 compute-0 kepler[230805]: E1002 19:13:47.017027       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct 02 19:13:47 compute-0 kepler[230805]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.018622       1 exporter.go:84] Number of CPUs: 8
Oct 02 19:13:47 compute-0 sudo[230745]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:47 compute-0 podman[230788]: 2025-10-02 19:13:47.026297295 +0000 UTC m=+0.163503642 container start c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, release-0.7.12=, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git)
Oct 02 19:13:47 compute-0 podman[230788]: kepler
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.029 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.031 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp61g3oh8q/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.902 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.907 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.910 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:46.910 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Oct 02 19:13:47 compute-0 systemd[1]: Started kepler container.
Oct 02 19:13:47 compute-0 sudo[230738]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.135 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.135 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.136 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.137 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.137 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.137 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.140 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.141 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.142 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.142 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.142 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.142 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.142 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.142 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 podman[230815]: 2025-10-02 19:13:47.143530649 +0000 UTC m=+0.111581626 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.143 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.143 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.144 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.145 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.146 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 systemd[1]: c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3-30e3423cca3d7a1b.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 systemd[1]: c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3-30e3423cca3d7a1b.service: Failed with result 'exit-code'.
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.147 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.148 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.149 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.150 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.151 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.152 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.153 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.154 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.155 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.156 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.157 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.158 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.159 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.160 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.161 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.162 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.166 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.166 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 02 19:13:47 compute-0 ceilometer_agent_ipmi[230556]: 2025-10-02 19:13:47.169 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Oct 02 19:13:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:13:47.446 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:13:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:13:47.447 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:13:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:13:47.447 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.526297       1 watcher.go:83] Using in cluster k8s config
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.526377       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct 02 19:13:47 compute-0 kepler[230805]: E1002 19:13:47.526525       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.533734       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.533812       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.541896       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.541953       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.556651       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.556713       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.556736       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568767       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568826       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568836       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568846       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568857       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568875       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.568994       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.569036       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.569068       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.569099       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.569263       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct 02 19:13:47 compute-0 kepler[230805]: I1002 19:13:47.569988       1 exporter.go:208] Started Kepler in 554.116316ms
Oct 02 19:13:47 compute-0 sudo[231002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxeeozadgyyollevqkfcztxzgffidakh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432427.2595105-529-22622880022561/AnsiballZ_find.py'
Oct 02 19:13:47 compute-0 sudo[231002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:47 compute-0 python3.9[231004]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 19:13:47 compute-0 sudo[231002]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:49 compute-0 sudo[231181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuffmdzpnqppvievmlawrqknlzdiqgmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432428.4044087-539-63399462214199/AnsiballZ_podman_container_info.py'
Oct 02 19:13:49 compute-0 sudo[231181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:49 compute-0 podman[231128]: 2025-10-02 19:13:49.086583379 +0000 UTC m=+0.100544785 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:13:49 compute-0 podman[231129]: 2025-10-02 19:13:49.101734237 +0000 UTC m=+0.096170960 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:13:49 compute-0 python3.9[231193]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 02 19:13:49 compute-0 sudo[231181]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:50 compute-0 sudo[231358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdmvbjxvifzjujjqxkansuzxxfwfcbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432429.6738708-547-192756001806188/AnsiballZ_podman_container_exec.py'
Oct 02 19:13:50 compute-0 sudo[231358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:50 compute-0 python3.9[231360]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:13:50 compute-0 systemd[1]: Started libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope.
Oct 02 19:13:50 compute-0 podman[231361]: 2025-10-02 19:13:50.775555704 +0000 UTC m=+0.149669258 container exec d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:13:50 compute-0 podman[231361]: 2025-10-02 19:13:50.812676241 +0000 UTC m=+0.186789744 container exec_died d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:13:50 compute-0 sudo[231358]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:50 compute-0 systemd[1]: libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope: Deactivated successfully.
Oct 02 19:13:51 compute-0 sudo[231538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigjsqoipqofiftvnvwbssmfnbpipyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432431.1207175-555-157701880250246/AnsiballZ_podman_container_exec.py'
Oct 02 19:13:51 compute-0 sudo[231538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:51 compute-0 python3.9[231540]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:13:52 compute-0 systemd[1]: Started libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope.
Oct 02 19:13:52 compute-0 podman[231541]: 2025-10-02 19:13:52.0394668 +0000 UTC m=+0.126306303 container exec d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:13:52 compute-0 podman[231541]: 2025-10-02 19:13:52.074699327 +0000 UTC m=+0.161538870 container exec_died d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:13:52 compute-0 systemd[1]: libpod-conmon-d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2.scope: Deactivated successfully.
Oct 02 19:13:52 compute-0 sudo[231538]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:52 compute-0 sudo[231720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcnvvzfnfpfejhutbhdtgsgjgqfnbwkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432432.3880274-563-86159859500910/AnsiballZ_file.py'
Oct 02 19:13:52 compute-0 sudo[231720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:53 compute-0 python3.9[231722]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:53 compute-0 sudo[231720]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:53 compute-0 podman[231818]: 2025-10-02 19:13:53.750112515 +0000 UTC m=+0.103281928 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:13:53 compute-0 podman[231811]: 2025-10-02 19:13:53.753598527 +0000 UTC m=+0.108729961 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:13:53 compute-0 sudo[231913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpqibivlulcalnvieajqtcxdyftxzrfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432433.3576584-572-93647076031303/AnsiballZ_podman_container_info.py'
Oct 02 19:13:53 compute-0 sudo[231913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:54 compute-0 python3.9[231915]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 02 19:13:54 compute-0 sudo[231913]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:55 compute-0 sudo[232076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jepfltmqgkxoklderzqvkaxetgdarzuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432434.5837781-580-251746468874813/AnsiballZ_podman_container_exec.py'
Oct 02 19:13:55 compute-0 sudo[232076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:55 compute-0 python3.9[232078]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:13:55 compute-0 systemd[1]: Started libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope.
Oct 02 19:13:55 compute-0 podman[232079]: 2025-10-02 19:13:55.443080087 +0000 UTC m=+0.125454111 container exec 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:13:55 compute-0 podman[232079]: 2025-10-02 19:13:55.478776966 +0000 UTC m=+0.161150930 container exec_died 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:13:55 compute-0 sudo[232076]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:55 compute-0 systemd[1]: libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope: Deactivated successfully.
Oct 02 19:13:56 compute-0 sudo[232256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsevvmbegpndsveopkwqqoebkumiikuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432435.8065603-588-257345097302397/AnsiballZ_podman_container_exec.py'
Oct 02 19:13:56 compute-0 sudo[232256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:56 compute-0 python3.9[232258]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:13:56 compute-0 systemd[1]: Started libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope.
Oct 02 19:13:56 compute-0 podman[232259]: 2025-10-02 19:13:56.724652367 +0000 UTC m=+0.122022521 container exec 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:13:56 compute-0 podman[232259]: 2025-10-02 19:13:56.758464866 +0000 UTC m=+0.155835020 container exec_died 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:13:56 compute-0 sudo[232256]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:56 compute-0 systemd[1]: libpod-conmon-40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d.scope: Deactivated successfully.
Oct 02 19:13:57 compute-0 podman[232416]: 2025-10-02 19:13:57.55964903 +0000 UTC m=+0.092933695 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:13:57 compute-0 sudo[232472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpiwklrpoyaglnymbgsoctvvyvcqnqnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432437.062127-596-190287014651457/AnsiballZ_file.py'
Oct 02 19:13:57 compute-0 sudo[232472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:57 compute-0 podman[232417]: 2025-10-02 19:13:57.641888134 +0000 UTC m=+0.159970689 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:13:57 compute-0 python3.9[232478]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:13:57 compute-0 sudo[232472]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:58 compute-0 sudo[232635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnbypzgcuqiiomlxnazgzmhchqlaurw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432438.2297833-605-21854294947645/AnsiballZ_podman_container_info.py'
Oct 02 19:13:58 compute-0 sudo[232635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:13:58 compute-0 python3.9[232637]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 02 19:13:59 compute-0 sudo[232635]: pam_unix(sudo:session): session closed for user root
Oct 02 19:13:59 compute-0 podman[209015]: time="2025-10-02T19:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:13:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30750 "" "Go-http-client/1.1"
Oct 02 19:13:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4670 "" "Go-http-client/1.1"
Oct 02 19:13:59 compute-0 sudo[232799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffignxcmqodmebghrnnqchhjabeyrisk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432439.4672334-613-254989045379130/AnsiballZ_podman_container_exec.py'
Oct 02 19:13:59 compute-0 sudo[232799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:00 compute-0 python3.9[232801]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:00 compute-0 systemd[1]: Started libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope.
Oct 02 19:14:00 compute-0 podman[232802]: 2025-10-02 19:14:00.329480507 +0000 UTC m=+0.162992898 container exec e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct 02 19:14:00 compute-0 podman[232802]: 2025-10-02 19:14:00.365288529 +0000 UTC m=+0.198800880 container exec_died e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:14:00 compute-0 sudo[232799]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:00 compute-0 systemd[1]: libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope: Deactivated successfully.
Oct 02 19:14:01 compute-0 sudo[232979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxlpbyszqtidkiyrnzqasmrboiltihif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432440.6677256-621-242503229317016/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:01 compute-0 sudo[232979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:01 compute-0 python3.9[232981]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: ERROR   19:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: ERROR   19:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: ERROR   19:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: ERROR   19:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:14:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:14:01 compute-0 systemd[1]: Started libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope.
Oct 02 19:14:01 compute-0 podman[232982]: 2025-10-02 19:14:01.506496017 +0000 UTC m=+0.127966437 container exec e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:14:01 compute-0 podman[232982]: 2025-10-02 19:14:01.54194499 +0000 UTC m=+0.163415420 container exec_died e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 19:14:01 compute-0 sudo[232979]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:01 compute-0 systemd[1]: libpod-conmon-e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd.scope: Deactivated successfully.
Oct 02 19:14:02 compute-0 sudo[233161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekgoqnwcryuglsmgzcyuaqkenvgnuzix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432441.919343-629-173835376150758/AnsiballZ_file.py'
Oct 02 19:14:02 compute-0 sudo[233161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:02 compute-0 python3.9[233163]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:02 compute-0 sudo[233161]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:03 compute-0 sudo[233313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyvebqqsrfgxrkgornbquxutiktrzemy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432443.082769-638-9479656131160/AnsiballZ_podman_container_info.py'
Oct 02 19:14:03 compute-0 sudo[233313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:03 compute-0 python3.9[233315]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 02 19:14:03 compute-0 sudo[233313]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:04 compute-0 sudo[233476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcshgfgvsyfoaizfxtsqrogosdxoxitv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432444.1551633-646-223894688704663/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:04 compute-0 sudo[233476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:04 compute-0 python3.9[233478]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:05 compute-0 systemd[1]: Started libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope.
Oct 02 19:14:05 compute-0 podman[233479]: 2025-10-02 19:14:05.046446752 +0000 UTC m=+0.137908499 container exec d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:14:05 compute-0 podman[233479]: 2025-10-02 19:14:05.079016158 +0000 UTC m=+0.170477885 container exec_died d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:14:05 compute-0 systemd[1]: libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope: Deactivated successfully.
Oct 02 19:14:05 compute-0 sudo[233476]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:05 compute-0 sudo[233657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odqtyhqzfpmxfzktdivhmffbafcglxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432445.4064717-654-269641110258386/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:05 compute-0 sudo[233657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:06 compute-0 podman[233659]: 2025-10-02 19:14:06.008713983 +0000 UTC m=+0.089579427 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.057 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.057 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.091 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.091 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.091 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.092 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:14:06 compute-0 python3.9[233660]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:06 compute-0 systemd[1]: Started libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope.
Oct 02 19:14:06 compute-0 podman[233684]: 2025-10-02 19:14:06.292626291 +0000 UTC m=+0.115839638 container exec d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:14:06 compute-0 podman[233684]: 2025-10-02 19:14:06.302169862 +0000 UTC m=+0.125383189 container exec_died d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 19:14:06 compute-0 sudo[233657]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:06 compute-0 systemd[1]: libpod-conmon-d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c.scope: Deactivated successfully.
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.436 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.436 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.56666946411133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.437 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.437 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.499 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.500 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.531 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.555 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.557 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:14:06 compute-0 nova_compute[194781]: 2025-10-02 19:14:06.558 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:14:07 compute-0 sudo[233864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpvcliqighaguvjbjlaccxkhxlqkqwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432446.6190462-662-192017882996296/AnsiballZ_file.py'
Oct 02 19:14:07 compute-0 sudo[233864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:07 compute-0 python3.9[233866]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:07 compute-0 sudo[233864]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.534 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.534 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.534 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.535 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.565 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.565 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:07 compute-0 nova_compute[194781]: 2025-10-02 19:14:07.565 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:08 compute-0 nova_compute[194781]: 2025-10-02 19:14:08.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:08 compute-0 nova_compute[194781]: 2025-10-02 19:14:08.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:08 compute-0 nova_compute[194781]: 2025-10-02 19:14:08.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:14:08 compute-0 sudo[234016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtobbajmpbnejszhzqvzrcbrmgvqsgsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432447.7226608-671-122168924208551/AnsiballZ_podman_container_info.py'
Oct 02 19:14:08 compute-0 sudo[234016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:08 compute-0 python3.9[234018]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 02 19:14:09 compute-0 nova_compute[194781]: 2025-10-02 19:14:09.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:14:09 compute-0 sudo[234016]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:10 compute-0 sudo[234179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtnzpmovlfatflqoylhhdmckiynrrgkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432449.8274071-679-226510945844158/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:10 compute-0 sudo[234179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:10 compute-0 python3.9[234181]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:10 compute-0 systemd[1]: Started libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope.
Oct 02 19:14:10 compute-0 podman[234182]: 2025-10-02 19:14:10.697900172 +0000 UTC m=+0.131436385 container exec 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:14:10 compute-0 podman[234182]: 2025-10-02 19:14:10.73111427 +0000 UTC m=+0.164650523 container exec_died 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20250930)
Oct 02 19:14:10 compute-0 systemd[1]: libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope: Deactivated successfully.
Oct 02 19:14:10 compute-0 sudo[234179]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:11 compute-0 sudo[234362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzxdthyqyvwoqtlzzfwxjequsiyatnid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432451.0269756-687-97632638802679/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:11 compute-0 sudo[234362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:11 compute-0 python3.9[234364]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:11 compute-0 systemd[1]: Started libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope.
Oct 02 19:14:11 compute-0 podman[234365]: 2025-10-02 19:14:11.878862784 +0000 UTC m=+0.132103742 container exec 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:14:11 compute-0 podman[234365]: 2025-10-02 19:14:11.91610459 +0000 UTC m=+0.169345578 container exec_died 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Oct 02 19:14:11 compute-0 systemd[1]: libpod-conmon-29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b.scope: Deactivated successfully.
Oct 02 19:14:11 compute-0 sudo[234362]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:12 compute-0 sudo[234543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loyjbpvjxnlxmulhwqweqrskmcxzvfsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432452.2726989-695-180220177792690/AnsiballZ_file.py'
Oct 02 19:14:12 compute-0 sudo[234543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:13 compute-0 python3.9[234545]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:13 compute-0 sudo[234543]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:13 compute-0 sudo[234695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqycqtigtpdngjgulxtsbbbywcgazrqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432453.369082-704-203132880399781/AnsiballZ_podman_container_info.py'
Oct 02 19:14:13 compute-0 sudo[234695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:14 compute-0 python3.9[234697]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 02 19:14:14 compute-0 sudo[234695]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:14 compute-0 podman[234785]: 2025-10-02 19:14:14.720602697 +0000 UTC m=+0.094510088 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute)
Oct 02 19:14:14 compute-0 sudo[234878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmthxvrpbaxmxzkyrdncfvineydztges ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432454.4819918-712-52091095126134/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:14 compute-0 sudo[234878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:15 compute-0 python3.9[234880]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:15 compute-0 systemd[1]: Started libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope.
Oct 02 19:14:15 compute-0 podman[234881]: 2025-10-02 19:14:15.241611736 +0000 UTC m=+0.127895390 container exec 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:14:15 compute-0 podman[234881]: 2025-10-02 19:14:15.27468481 +0000 UTC m=+0.160968484 container exec_died 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:14:15 compute-0 systemd[1]: libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope: Deactivated successfully.
Oct 02 19:14:15 compute-0 sudo[234878]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:15 compute-0 podman[234910]: 2025-10-02 19:14:15.483953705 +0000 UTC m=+0.101203397 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:14:15 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-70c6c021b2456d10.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 19:14:15 compute-0 systemd[1]: 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab-70c6c021b2456d10.service: Failed with result 'exit-code'.
Oct 02 19:14:16 compute-0 sudo[235077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeugqfghytsvcaexrnwljbbinfobnaba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432455.618669-720-72602458580331/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:16 compute-0 sudo[235077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:16 compute-0 python3.9[235079]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:16 compute-0 systemd[1]: Started libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope.
Oct 02 19:14:16 compute-0 podman[235080]: 2025-10-02 19:14:16.478266058 +0000 UTC m=+0.131903188 container exec 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:14:16 compute-0 podman[235080]: 2025-10-02 19:14:16.532379565 +0000 UTC m=+0.186016695 container exec_died 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:16 compute-0 systemd[1]: libpod-conmon-61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4.scope: Deactivated successfully.
Oct 02 19:14:16 compute-0 sudo[235077]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:17 compute-0 sudo[235276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlufsxbdiolgfeahokidxjtzdplovadu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432456.931999-728-167970572088507/AnsiballZ_file.py'
Oct 02 19:14:17 compute-0 sudo[235276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:17 compute-0 podman[235232]: 2025-10-02 19:14:17.452462153 +0000 UTC m=+0.074982376 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64)
Oct 02 19:14:17 compute-0 python3.9[235280]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:17 compute-0 sudo[235276]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:18 compute-0 sudo[235430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwycdrqywiuknsomrtkbnazoacltard ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432457.9884384-737-156720668198662/AnsiballZ_podman_container_info.py'
Oct 02 19:14:18 compute-0 sudo[235430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:18 compute-0 python3.9[235432]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 02 19:14:18 compute-0 sudo[235430]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:19 compute-0 podman[235551]: 2025-10-02 19:14:19.792337848 +0000 UTC m=+0.158512169 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64)
Oct 02 19:14:19 compute-0 podman[235558]: 2025-10-02 19:14:19.797801574 +0000 UTC m=+0.148083380 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:14:19 compute-0 sudo[235633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maiitizfchpbgprlxqxgafwixajidfmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432459.2699265-745-61323876721903/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:19 compute-0 sudo[235633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:20 compute-0 python3.9[235635]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:20 compute-0 systemd[1]: Started libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope.
Oct 02 19:14:20 compute-0 podman[235636]: 2025-10-02 19:14:20.220407873 +0000 UTC m=+0.144762632 container exec 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:20 compute-0 podman[235636]: 2025-10-02 19:14:20.254022251 +0000 UTC m=+0.178377010 container exec_died 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:20 compute-0 systemd[1]: libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope: Deactivated successfully.
Oct 02 19:14:20 compute-0 sudo[235633]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:21 compute-0 sudo[235815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwnqrevxlffecvmwlfpnacnzatoyzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432460.5985692-753-69643407419515/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:21 compute-0 sudo[235815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:21 compute-0 python3.9[235817]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:21 compute-0 systemd[1]: Started libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope.
Oct 02 19:14:21 compute-0 podman[235818]: 2025-10-02 19:14:21.533746103 +0000 UTC m=+0.149312802 container exec 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:14:21 compute-0 podman[235818]: 2025-10-02 19:14:21.569700095 +0000 UTC m=+0.185266764 container exec_died 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:21 compute-0 systemd[1]: libpod-conmon-723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62.scope: Deactivated successfully.
Oct 02 19:14:21 compute-0 sudo[235815]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:22 compute-0 sudo[235995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakedcrjjtrajglaxduqjgyynxdimwyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432461.9442272-761-145996520523276/AnsiballZ_file.py'
Oct 02 19:14:22 compute-0 sudo[235995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:22 compute-0 python3.9[235997]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:22 compute-0 sudo[235995]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:23 compute-0 sudo[236147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpopyyglsynwnadwumglpfspxxsthbib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432462.9707341-770-190795820375727/AnsiballZ_podman_container_info.py'
Oct 02 19:14:23 compute-0 sudo[236147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:23 compute-0 python3.9[236149]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 02 19:14:23 compute-0 sudo[236147]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:24 compute-0 podman[236285]: 2025-10-02 19:14:24.634594744 +0000 UTC m=+0.099901852 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:14:24 compute-0 podman[236286]: 2025-10-02 19:14:24.638783196 +0000 UTC m=+0.103785096 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:14:24 compute-0 sudo[236351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuvumwtddsvsgnjpnwoohowizkhshznc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432464.1028612-778-155555276198587/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:24 compute-0 sudo[236351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:24 compute-0 python3.9[236354]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:24 compute-0 systemd[1]: Started libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope.
Oct 02 19:14:25 compute-0 podman[236355]: 2025-10-02 19:14:25.002719885 +0000 UTC m=+0.119964489 container exec a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Oct 02 19:14:25 compute-0 podman[236355]: 2025-10-02 19:14:25.037274678 +0000 UTC m=+0.154519192 container exec_died a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, config_id=edpm, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter)
Oct 02 19:14:25 compute-0 systemd[1]: libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope: Deactivated successfully.
Oct 02 19:14:25 compute-0 sudo[236351]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:25 compute-0 sudo[236535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfbyampjlppweatjbvlksjbudecexvgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432465.3964393-786-200907146585725/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:25 compute-0 sudo[236535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:26 compute-0 python3.9[236537]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:26 compute-0 systemd[1]: Started libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope.
Oct 02 19:14:26 compute-0 podman[236538]: 2025-10-02 19:14:26.278562084 +0000 UTC m=+0.135544175 container exec a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, version=9.6, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git)
Oct 02 19:14:26 compute-0 podman[236538]: 2025-10-02 19:14:26.313777006 +0000 UTC m=+0.170759017 container exec_died a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:14:26 compute-0 systemd[1]: libpod-conmon-a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1.scope: Deactivated successfully.
Oct 02 19:14:26 compute-0 sudo[236535]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:27 compute-0 sudo[236719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfhdhrlxyvstpkpxeplmsnpduvvnttev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432466.6322298-794-199239739427602/AnsiballZ_file.py'
Oct 02 19:14:27 compute-0 sudo[236719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:27 compute-0 python3.9[236721]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:27 compute-0 sudo[236719]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:27 compute-0 podman[236749]: 2025-10-02 19:14:27.71284584 +0000 UTC m=+0.087648085 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 19:14:27 compute-0 podman[236799]: 2025-10-02 19:14:27.90473507 +0000 UTC m=+0.160001809 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:14:28 compute-0 sudo[236914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmkkhkwaqriudsnxhchvlhweyjdcgvnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432467.673598-803-213016714005856/AnsiballZ_podman_container_info.py'
Oct 02 19:14:28 compute-0 sudo[236914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:28 compute-0 python3.9[236916]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Oct 02 19:14:28 compute-0 sudo[236914]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:29 compute-0 sudo[237078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkddkescanxzcetwetqodsqdigddbtzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432468.7256787-811-215419592861114/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:29 compute-0 sudo[237078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:29 compute-0 python3.9[237080]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:29 compute-0 systemd[1]: Started libpod-conmon-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.scope.
Oct 02 19:14:29 compute-0 podman[237081]: 2025-10-02 19:14:29.621686174 +0000 UTC m=+0.113274879 container exec 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:14:29 compute-0 podman[237081]: 2025-10-02 19:14:29.654778029 +0000 UTC m=+0.146366694 container exec_died 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:14:29 compute-0 systemd[1]: libpod-conmon-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.scope: Deactivated successfully.
Oct 02 19:14:29 compute-0 sudo[237078]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:29 compute-0 podman[209015]: time="2025-10-02T19:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:14:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30749 "" "Go-http-client/1.1"
Oct 02 19:14:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4674 "" "Go-http-client/1.1"
Oct 02 19:14:30 compute-0 sudo[237256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypmievtrlmcvptyjfelsmrguvvgkodaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432469.9323113-819-91496381570693/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:30 compute-0 sudo[237256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:30 compute-0 python3.9[237258]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:30 compute-0 systemd[1]: Started libpod-conmon-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.scope.
Oct 02 19:14:30 compute-0 podman[237259]: 2025-10-02 19:14:30.745661294 +0000 UTC m=+0.125397743 container exec 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:14:30 compute-0 podman[237259]: 2025-10-02 19:14:30.780920897 +0000 UTC m=+0.160657326 container exec_died 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:14:30 compute-0 sudo[237256]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:30 compute-0 systemd[1]: libpod-conmon-1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab.scope: Deactivated successfully.
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: ERROR   19:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: ERROR   19:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: ERROR   19:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: ERROR   19:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:14:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:14:31 compute-0 sudo[237438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xccgmcgbpozyxwswspaahkflpxaxshis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432471.118768-827-45887526405976/AnsiballZ_file.py'
Oct 02 19:14:31 compute-0 sudo[237438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:31 compute-0 python3.9[237440]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:31 compute-0 sudo[237438]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:32 compute-0 sudo[237590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqjsbfjimvkttpgthgelarotlgonanx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432472.078639-836-97116275092636/AnsiballZ_podman_container_info.py'
Oct 02 19:14:32 compute-0 sudo[237590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:32 compute-0 python3.9[237592]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Oct 02 19:14:32 compute-0 sudo[237590]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:33 compute-0 sudo[237754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyyoktlgjlhosfhhbhicahexupivrbcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432473.1743383-844-87040347035931/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:33 compute-0 sudo[237754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:33 compute-0 python3.9[237756]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:33 compute-0 systemd[1]: Started libpod-conmon-c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.scope.
Oct 02 19:14:34 compute-0 podman[237757]: 2025-10-02 19:14:34.008426563 +0000 UTC m=+0.146438876 container exec c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:14:34 compute-0 podman[237757]: 2025-10-02 19:14:34.040602783 +0000 UTC m=+0.178615076 container exec_died c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, container_name=kepler, vcs-type=git, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9)
Oct 02 19:14:34 compute-0 systemd[1]: libpod-conmon-c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.scope: Deactivated successfully.
Oct 02 19:14:34 compute-0 sudo[237754]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:34 compute-0 sudo[237936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgnacnxitguyhgxnbfhhhjeabsoxvjkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432474.351962-852-233660669317845/AnsiballZ_podman_container_exec.py'
Oct 02 19:14:34 compute-0 sudo[237936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:34 compute-0 python3.9[237938]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 02 19:14:35 compute-0 systemd[1]: Started libpod-conmon-c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.scope.
Oct 02 19:14:35 compute-0 podman[237939]: 2025-10-02 19:14:35.103653704 +0000 UTC m=+0.111504822 container exec c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, name=ubi9)
Oct 02 19:14:35 compute-0 podman[237939]: 2025-10-02 19:14:35.135072454 +0000 UTC m=+0.142923522 container exec_died c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Oct 02 19:14:35 compute-0 systemd[1]: libpod-conmon-c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3.scope: Deactivated successfully.
Oct 02 19:14:35 compute-0 sudo[237936]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:35 compute-0 sudo[238117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prscecmpuegzlnzutqbiagxofuiggqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432475.4481373-860-221127632573062/AnsiballZ_file.py'
Oct 02 19:14:35 compute-0 sudo[238117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:36 compute-0 python3.9[238119]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:36 compute-0 sudo[238117]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:36 compute-0 podman[238216]: 2025-10-02 19:14:36.723300904 +0000 UTC m=+0.094345263 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:14:36 compute-0 sudo[238292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokokdzyyjfygcrvrpqomgaxqbshdoly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432476.412909-869-231250708242970/AnsiballZ_file.py'
Oct 02 19:14:36 compute-0 sudo[238292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:37 compute-0 python3.9[238294]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:37 compute-0 sudo[238292]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:37 compute-0 sudo[238444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmiuocdocmbcviqrpbijgxzsztgsoxvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432477.4295075-877-58531405677107/AnsiballZ_stat.py'
Oct 02 19:14:37 compute-0 sudo[238444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:38 compute-0 python3.9[238446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:38 compute-0 sudo[238444]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:38 compute-0 sudo[238567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfhfjxvqrgkpqimxuyktgwomogntgni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432477.4295075-877-58531405677107/AnsiballZ_copy.py'
Oct 02 19:14:38 compute-0 sudo[238567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:38 compute-0 python3.9[238569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759432477.4295075-877-58531405677107/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:38 compute-0 sudo[238567]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:39 compute-0 sudo[238719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnekpinggsprgahnrcdceennkfthtptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432479.232237-893-40410713380118/AnsiballZ_file.py'
Oct 02 19:14:39 compute-0 sudo[238719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:40 compute-0 python3.9[238721]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:40 compute-0 sudo[238719]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:40 compute-0 sudo[238871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctsrkmsccnemxtkrebjdtxmyxhapzzgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432480.3581207-901-133139868354826/AnsiballZ_stat.py'
Oct 02 19:14:40 compute-0 sudo[238871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:41 compute-0 python3.9[238873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:41 compute-0 sudo[238871]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:41 compute-0 sudo[238949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxoailmoetpyksocrmojxgwjhtxcxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432480.3581207-901-133139868354826/AnsiballZ_file.py'
Oct 02 19:14:41 compute-0 sudo[238949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:41 compute-0 python3.9[238951]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:41 compute-0 sudo[238949]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:42 compute-0 sudo[239101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiecstmhpqymfbchovsjhfpvazcuiwmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432482.1891832-913-204673330666053/AnsiballZ_stat.py'
Oct 02 19:14:42 compute-0 sudo[239101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:43 compute-0 python3.9[239103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:43 compute-0 sudo[239101]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:43 compute-0 sudo[239179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjkuxmavzzpfosovsdcmmcflsxtocgtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432482.1891832-913-204673330666053/AnsiballZ_file.py'
Oct 02 19:14:43 compute-0 sudo[239179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:43 compute-0 python3.9[239181]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.31lono9t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:43 compute-0 sudo[239179]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:44 compute-0 sudo[239331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coxzfzafnmpzwpjluowbeqvcbwyqttlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432483.9460635-925-136409554858425/AnsiballZ_stat.py'
Oct 02 19:14:44 compute-0 sudo[239331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:44 compute-0 python3.9[239333]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:44 compute-0 sudo[239331]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:45 compute-0 podman[239383]: 2025-10-02 19:14:45.061170754 +0000 UTC m=+0.109225551 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930)
Oct 02 19:14:45 compute-0 sudo[239427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvoucywjxxdnzsaajvrktvqktcwqucz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432483.9460635-925-136409554858425/AnsiballZ_file.py'
Oct 02 19:14:45 compute-0 sudo[239427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:45 compute-0 python3.9[239430]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:45 compute-0 sudo[239427]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:45 compute-0 podman[239472]: 2025-10-02 19:14:45.785006146 +0000 UTC m=+0.143174239 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:14:46 compute-0 sudo[239600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfrqmmudnyrcddqvsgrmetznvjzztcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432485.6490488-938-125321304758043/AnsiballZ_command.py'
Oct 02 19:14:46 compute-0 sudo[239600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:46 compute-0 python3.9[239602]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:14:46 compute-0 sudo[239600]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:14:47.448 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:14:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:14:47.448 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:14:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:14:47.449 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:14:47 compute-0 sudo[239753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzhznkdkfiufodzsoajnrtctqkchwhrr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432486.775374-946-143124234075222/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 19:14:47 compute-0 sudo[239753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:47 compute-0 podman[239755]: 2025-10-02 19:14:47.656316554 +0000 UTC m=+0.115463968 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public)
Oct 02 19:14:47 compute-0 python3[239756]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 19:14:47 compute-0 sudo[239753]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:48 compute-0 sudo[239923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsnnoaxquotklxailafgunxqpqwjrvfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432487.9702356-954-135362529701365/AnsiballZ_stat.py'
Oct 02 19:14:48 compute-0 sudo[239923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:48 compute-0 python3.9[239925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:48 compute-0 sudo[239923]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:49 compute-0 sudo[240001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkpzkbeqlnubbyvtdfqclscmzfpmllom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432487.9702356-954-135362529701365/AnsiballZ_file.py'
Oct 02 19:14:49 compute-0 sudo[240001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:49 compute-0 python3.9[240003]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:49 compute-0 sudo[240001]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:50 compute-0 sudo[240186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfqwdozvsfntomxyroctwktxwgwqiymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432489.7868514-966-100499273237055/AnsiballZ_stat.py'
Oct 02 19:14:50 compute-0 podman[240128]: 2025-10-02 19:14:50.314699585 +0000 UTC m=+0.094075727 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, architecture=x86_64)
Oct 02 19:14:50 compute-0 sudo[240186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:50 compute-0 podman[240129]: 2025-10-02 19:14:50.384979574 +0000 UTC m=+0.147400132 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:14:50 compute-0 python3.9[240192]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:50 compute-0 sudo[240186]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:50 compute-0 sudo[240271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klmnksrtlopajjpvuleypcexcyulqqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432489.7868514-966-100499273237055/AnsiballZ_file.py'
Oct 02 19:14:50 compute-0 sudo[240271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:51 compute-0 python3.9[240273]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:51 compute-0 sudo[240271]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:51 compute-0 sudo[240423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgylzjtncazrcfwnctynxhudulcvedah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432491.4472194-978-36258587502638/AnsiballZ_stat.py'
Oct 02 19:14:52 compute-0 sudo[240423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:52 compute-0 python3.9[240425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:52 compute-0 sudo[240423]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:52 compute-0 sudo[240501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosasydmgqfwngsanqdhujlunqprdchc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432491.4472194-978-36258587502638/AnsiballZ_file.py'
Oct 02 19:14:52 compute-0 sudo[240501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:52 compute-0 python3.9[240503]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:52 compute-0 sudo[240501]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:53 compute-0 sudo[240653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjejqklijkobwgsvwgatsbippoowaoif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432493.089831-990-259071580697286/AnsiballZ_stat.py'
Oct 02 19:14:53 compute-0 sudo[240653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:53 compute-0 python3.9[240655]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:54 compute-0 sudo[240653]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:54 compute-0 sudo[240731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbqrfwjrvbbieytmgshdrwftbybnlvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432493.089831-990-259071580697286/AnsiballZ_file.py'
Oct 02 19:14:54 compute-0 sudo[240731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:54 compute-0 python3.9[240733]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:54 compute-0 sudo[240731]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:55 compute-0 podman[240857]: 2025-10-02 19:14:55.555848866 +0000 UTC m=+0.097974751 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:14:55 compute-0 sudo[240909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirqbzadozlhmkppdmixejvhwdzadzoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432494.9281294-1002-280457438409503/AnsiballZ_stat.py'
Oct 02 19:14:55 compute-0 sudo[240909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:55 compute-0 podman[240858]: 2025-10-02 19:14:55.598369042 +0000 UTC m=+0.127116149 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:14:55 compute-0 python3.9[240924]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:14:55 compute-0 sudo[240909]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:56 compute-0 sudo[241048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbpqcshnfkaqpyrywaqcsifksrxxzwly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432494.9281294-1002-280457438409503/AnsiballZ_copy.py'
Oct 02 19:14:56 compute-0 sudo[241048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:56 compute-0 python3.9[241050]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759432494.9281294-1002-280457438409503/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:56 compute-0 sudo[241048]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:57 compute-0 sudo[241200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrjwxaotlujktqgffioxifdxfzqgaxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432497.053323-1017-273676411622019/AnsiballZ_file.py'
Oct 02 19:14:57 compute-0 sudo[241200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:57 compute-0 python3.9[241202]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:14:57 compute-0 sudo[241200]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:58 compute-0 podman[241326]: 2025-10-02 19:14:58.591012909 +0000 UTC m=+0.093489920 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Oct 02 19:14:58 compute-0 sudo[241383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgisfepalsmppbacxczyncwtklizreqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432498.0669155-1025-64737876468136/AnsiballZ_command.py'
Oct 02 19:14:58 compute-0 sudo[241383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:14:58 compute-0 podman[241327]: 2025-10-02 19:14:58.665955543 +0000 UTC m=+0.157525763 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:14:58 compute-0 python3.9[241389]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:14:58 compute-0 sudo[241383]: pam_unix(sudo:session): session closed for user root
Oct 02 19:14:59 compute-0 podman[209015]: time="2025-10-02T19:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:14:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:14:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4668 "" "Go-http-client/1.1"
Oct 02 19:14:59 compute-0 sudo[241549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsyhjkmlwfvyhfaccyfasvopiewjvcfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432499.1711264-1033-69316599189501/AnsiballZ_blockinfile.py'
Oct 02 19:14:59 compute-0 sudo[241549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:00 compute-0 python3.9[241551]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:00 compute-0 sudo[241549]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:00 compute-0 sudo[241701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdjgktdtrgfvxjzvnhnbbdyfoalevpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432500.3954105-1042-96457062783556/AnsiballZ_command.py'
Oct 02 19:15:00 compute-0 sudo[241701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:01 compute-0 python3.9[241703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:15:01 compute-0 sudo[241701]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:01 compute-0 openstack_network_exporter[211160]: ERROR   19:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:01 compute-0 openstack_network_exporter[211160]: ERROR   19:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:15:01 compute-0 openstack_network_exporter[211160]: ERROR   19:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:01 compute-0 openstack_network_exporter[211160]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:15:01 compute-0 openstack_network_exporter[211160]: ERROR   19:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:15:01 compute-0 sudo[241854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spiklhbodhugcmxcthwlwicmrxabpkta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432501.3023896-1050-101216536333784/AnsiballZ_stat.py'
Oct 02 19:15:01 compute-0 sudo[241854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:01 compute-0 python3.9[241856]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:15:01 compute-0 sudo[241854]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:02 compute-0 sudo[242008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eklawgnncylzbquzszkbsgvqamlgdfdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432502.1795208-1058-83171825160511/AnsiballZ_command.py'
Oct 02 19:15:02 compute-0 sudo[242008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:02 compute-0 python3.9[242010]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:15:02 compute-0 sudo[242008]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:03 compute-0 sudo[242163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwjktkddlqdcjvlcdmcjrzgrnwwzffnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432503.146032-1066-227728929658214/AnsiballZ_file.py'
Oct 02 19:15:03 compute-0 sudo[242163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:03 compute-0 python3.9[242165]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:03 compute-0 sudo[242163]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:04 compute-0 nova_compute[194781]: 2025-10-02 19:15:04.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:04 compute-0 nova_compute[194781]: 2025-10-02 19:15:04.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:15:04 compute-0 nova_compute[194781]: 2025-10-02 19:15:04.078 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:15:04 compute-0 nova_compute[194781]: 2025-10-02 19:15:04.080 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:04 compute-0 nova_compute[194781]: 2025-10-02 19:15:04.080 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:15:04 compute-0 nova_compute[194781]: 2025-10-02 19:15:04.095 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:04 compute-0 sshd-session[221000]: Connection closed by 192.168.122.30 port 46598
Oct 02 19:15:04 compute-0 sshd-session[220997]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:15:04 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Oct 02 19:15:04 compute-0 systemd[1]: session-28.scope: Consumed 1min 49.059s CPU time.
Oct 02 19:15:04 compute-0 systemd-logind[798]: Session 28 logged out. Waiting for processes to exit.
Oct 02 19:15:04 compute-0 systemd-logind[798]: Removed session 28.
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.107 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.109 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.148 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.149 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.150 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.150 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.599 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.600 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5672MB free_disk=72.56499862670898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.600 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.600 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.769 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.769 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:15:06 compute-0 nova_compute[194781]: 2025-10-02 19:15:06.900 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.006 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.007 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.024 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.055 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.084 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.101 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.102 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:15:07 compute-0 nova_compute[194781]: 2025-10-02 19:15:07.102 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:15:07 compute-0 podman[242190]: 2025-10-02 19:15:07.718412589 +0000 UTC m=+0.089574696 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.027 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.028 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.028 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.028 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.048 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.049 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:08 compute-0 nova_compute[194781]: 2025-10-02 19:15:08.050 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:09 compute-0 nova_compute[194781]: 2025-10-02 19:15:09.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:10 compute-0 nova_compute[194781]: 2025-10-02 19:15:10.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:10 compute-0 nova_compute[194781]: 2025-10-02 19:15:10.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:15:10 compute-0 nova_compute[194781]: 2025-10-02 19:15:10.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:15:10 compute-0 sshd-session[242212]: Accepted publickey for zuul from 192.168.122.30 port 60064 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:15:10 compute-0 systemd-logind[798]: New session 29 of user zuul.
Oct 02 19:15:10 compute-0 systemd[1]: Started Session 29 of User zuul.
Oct 02 19:15:10 compute-0 sshd-session[242212]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:15:12 compute-0 python3.9[242365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.936 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.937 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.937 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.940 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.942 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.942 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fce9c10>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.944 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.955 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:15:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:15:13 compute-0 sudo[242520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcrbdsrhoqoihjocyafdwpwpjnwbmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432512.8056347-34-110223265389652/AnsiballZ_systemd.py'
Oct 02 19:15:13 compute-0 sudo[242520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:14 compute-0 python3.9[242522]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Oct 02 19:15:14 compute-0 sudo[242520]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:14 compute-0 sudo[242673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjvaygqjymlnhzkvywbdawudydmujumq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432514.4732149-42-174357741600489/AnsiballZ_setup.py'
Oct 02 19:15:14 compute-0 sudo[242673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:15 compute-0 python3.9[242675]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 19:15:15 compute-0 sudo[242673]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:15 compute-0 podman[242684]: 2025-10-02 19:15:15.774856946 +0000 UTC m=+0.131523755 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:15:16 compute-0 sudo[242791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffmgycfwnjhdwzsbrvuhglecmrfonww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432514.4732149-42-174357741600489/AnsiballZ_dnf.py'
Oct 02 19:15:16 compute-0 sudo[242791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:16 compute-0 podman[242750]: 2025-10-02 19:15:16.276543983 +0000 UTC m=+0.131786022 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Oct 02 19:15:16 compute-0 python3.9[242796]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 19:15:18 compute-0 podman[242803]: 2025-10-02 19:15:18.729693057 +0000 UTC m=+0.106397368 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, version=9.4, release=1214.1726694543, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:15:18 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 02 19:15:18 compute-0 PackageKit[242823]: daemon start
Oct 02 19:15:18 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 02 19:15:19 compute-0 sudo[242791]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:20 compute-0 sudo[242977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkaiogwmblkxrqynbmowxebrxtnjqpmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432519.3205593-54-214834961407518/AnsiballZ_stat.py'
Oct 02 19:15:20 compute-0 sudo[242977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:20 compute-0 python3.9[242979]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:15:20 compute-0 sudo[242977]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:20 compute-0 podman[243027]: 2025-10-02 19:15:20.710343833 +0000 UTC m=+0.077573300 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:15:20 compute-0 podman[243028]: 2025-10-02 19:15:20.725384824 +0000 UTC m=+0.087741168 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct 02 19:15:21 compute-0 sudo[243139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfcraieuxznopyvaycidjplifpoclfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432519.3205593-54-214834961407518/AnsiballZ_copy.py'
Oct 02 19:15:21 compute-0 sudo[243139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:21 compute-0 python3.9[243141]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432519.3205593-54-214834961407518/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:21 compute-0 sudo[243139]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:22 compute-0 sudo[243291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpeaqbibpuvbwpnvfisxpsxqenerjdwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432521.5298316-69-112100787737780/AnsiballZ_file.py'
Oct 02 19:15:22 compute-0 sudo[243291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:22 compute-0 python3.9[243293]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:22 compute-0 sudo[243291]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:23 compute-0 sudo[243443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mniwcahhhnwjuukoceluwkfltpqrwyzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432522.664802-77-14809970081144/AnsiballZ_stat.py'
Oct 02 19:15:23 compute-0 sudo[243443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:23 compute-0 python3.9[243445]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 19:15:23 compute-0 sudo[243443]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:23 compute-0 sudo[243566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firajsjnvzjwcurejgbluiyjejltsixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432522.664802-77-14809970081144/AnsiballZ_copy.py'
Oct 02 19:15:23 compute-0 sudo[243566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:24 compute-0 python3.9[243568]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759432522.664802-77-14809970081144/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 19:15:24 compute-0 sudo[243566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:24 compute-0 sudo[243718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdvwhhwkuubenkfcwlnsvahojhguibj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759432524.3447907-92-259475450665709/AnsiballZ_systemd.py'
Oct 02 19:15:24 compute-0 sudo[243718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:15:25 compute-0 python3.9[243720]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 19:15:25 compute-0 systemd[1]: Stopping System Logging Service...
Oct 02 19:15:25 compute-0 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] exiting on signal 15.
Oct 02 19:15:25 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Oct 02 19:15:25 compute-0 systemd[1]: Stopped System Logging Service.
Oct 02 19:15:25 compute-0 systemd[1]: rsyslog.service: Consumed 4.164s CPU time, 10.2M memory peak, read 0B from disk, written 6.7M to disk.
Oct 02 19:15:25 compute-0 systemd[1]: Starting System Logging Service...
Oct 02 19:15:25 compute-0 podman[243725]: 2025-10-02 19:15:25.727895851 +0000 UTC m=+0.082599197 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:15:25 compute-0 podman[243724]: 2025-10-02 19:15:25.742090488 +0000 UTC m=+0.115703491 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:15:25 compute-0 rsyslogd[243731]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="243731" x-info="https://www.rsyslog.com"] start
Oct 02 19:15:25 compute-0 systemd[1]: Started System Logging Service.
Oct 02 19:15:25 compute-0 rsyslogd[243731]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:15:25 compute-0 rsyslogd[243731]: Warning: Certificate file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Oct 02 19:15:25 compute-0 rsyslogd[243731]: Warning: Key file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Oct 02 19:15:25 compute-0 rsyslogd[243731]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2506.0-2.el9]
Oct 02 19:15:25 compute-0 sudo[243718]: pam_unix(sudo:session): session closed for user root
Oct 02 19:15:25 compute-0 rsyslogd[243731]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2506.0-2.el9]
Oct 02 19:15:26 compute-0 sshd-session[242215]: Connection closed by 192.168.122.30 port 60064
Oct 02 19:15:26 compute-0 sshd-session[242212]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:15:26 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Oct 02 19:15:26 compute-0 systemd[1]: session-29.scope: Consumed 11.799s CPU time.
Oct 02 19:15:26 compute-0 systemd-logind[798]: Session 29 logged out. Waiting for processes to exit.
Oct 02 19:15:26 compute-0 systemd-logind[798]: Removed session 29.
Oct 02 19:15:29 compute-0 podman[243795]: 2025-10-02 19:15:29.695074701 +0000 UTC m=+0.066866458 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:15:29 compute-0 podman[209015]: time="2025-10-02T19:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:15:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:15:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4671 "" "Go-http-client/1.1"
Oct 02 19:15:29 compute-0 podman[243796]: 2025-10-02 19:15:29.784466633 +0000 UTC m=+0.150473352 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: ERROR   19:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: ERROR   19:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: ERROR   19:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: ERROR   19:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:15:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:15:38 compute-0 podman[243838]: 2025-10-02 19:15:38.704899195 +0000 UTC m=+0.080293145 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:15:46 compute-0 podman[243862]: 2025-10-02 19:15:46.709698681 +0000 UTC m=+0.082912317 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:15:46 compute-0 podman[243861]: 2025-10-02 19:15:46.709738782 +0000 UTC m=+0.077568671 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:15:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:15:47.449 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:15:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:15:47.450 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:15:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:15:47.450 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:15:49 compute-0 podman[243899]: 2025-10-02 19:15:49.764699479 +0000 UTC m=+0.121041418 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release=1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:15:51 compute-0 podman[243919]: 2025-10-02 19:15:51.72325607 +0000 UTC m=+0.101368070 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc.)
Oct 02 19:15:51 compute-0 podman[243920]: 2025-10-02 19:15:51.751583454 +0000 UTC m=+0.114170220 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:15:56 compute-0 podman[243961]: 2025-10-02 19:15:56.717728138 +0000 UTC m=+0.087095620 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:15:56 compute-0 podman[243962]: 2025-10-02 19:15:56.743527033 +0000 UTC m=+0.120358819 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:15:59 compute-0 podman[209015]: time="2025-10-02T19:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:15:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:15:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4669 "" "Go-http-client/1.1"
Oct 02 19:16:00 compute-0 podman[244004]: 2025-10-02 19:16:00.681922467 +0000 UTC m=+0.061366848 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:16:00 compute-0 podman[244005]: 2025-10-02 19:16:00.732351785 +0000 UTC m=+0.109372759 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: ERROR   19:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: ERROR   19:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: ERROR   19:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: ERROR   19:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:16:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:16:03 compute-0 sshd-session[244048]: Accepted publickey for zuul from 38.102.83.227 port 47642 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 19:16:03 compute-0 systemd-logind[798]: New session 30 of user zuul.
Oct 02 19:16:03 compute-0 systemd[1]: Started Session 30 of User zuul.
Oct 02 19:16:03 compute-0 sshd-session[244048]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:16:04 compute-0 python3[244225]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:16:06 compute-0 sudo[244446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shaknnstwikxjzteooxsirfacxrgtpqs ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432565.9269187-33358-134906566232347/AnsiballZ_command.py'
Oct 02 19:16:06 compute-0 sudo[244446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:06 compute-0 python3[244448]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:16:06 compute-0 sudo[244446]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:07 compute-0 sudo[244599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apwxglvwyyeclipslymjcorpyiqrqsgq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432567.027679-33369-66626738739725/AnsiballZ_command.py'
Oct 02 19:16:07 compute-0 sudo[244599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:07 compute-0 python3[244601]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.079 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.080 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.081 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.082 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.481 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.482 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5671MB free_disk=72.5629653930664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.483 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.483 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.565 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.565 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.593 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.608 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.609 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:16:08 compute-0 nova_compute[194781]: 2025-10-02 19:16:08.610 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:16:09 compute-0 sudo[244599]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:09 compute-0 nova_compute[194781]: 2025-10-02 19:16:09.607 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:09 compute-0 nova_compute[194781]: 2025-10-02 19:16:09.630 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:09 compute-0 nova_compute[194781]: 2025-10-02 19:16:09.631 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:16:09 compute-0 nova_compute[194781]: 2025-10-02 19:16:09.632 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:16:09 compute-0 nova_compute[194781]: 2025-10-02 19:16:09.649 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:16:09 compute-0 podman[244628]: 2025-10-02 19:16:09.75687211 +0000 UTC m=+0.111217799 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:16:10 compute-0 nova_compute[194781]: 2025-10-02 19:16:10.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:10 compute-0 nova_compute[194781]: 2025-10-02 19:16:10.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:10 compute-0 nova_compute[194781]: 2025-10-02 19:16:10.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:10 compute-0 python3[244777]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 19:16:11 compute-0 nova_compute[194781]: 2025-10-02 19:16:11.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:11 compute-0 sudo[244928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqxjtnkcnrrfsljhmctljkcioholdhx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432571.1538887-33413-92839223012459/AnsiballZ_setup.py'
Oct 02 19:16:11 compute-0 sudo[244928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:11 compute-0 python3[244930]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 19:16:12 compute-0 nova_compute[194781]: 2025-10-02 19:16:12.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:16:12 compute-0 nova_compute[194781]: 2025-10-02 19:16:12.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:16:12 compute-0 sudo[244928]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:14 compute-0 sudo[245153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxkcfwfafzwoenhihcufjptmipnqclxj ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432573.6145694-33442-278498489242287/AnsiballZ_command.py'
Oct 02 19:16:14 compute-0 sudo[245153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:14 compute-0 python3[245155]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:16:14 compute-0 sudo[245153]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:15 compute-0 sudo[245317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzdlucgjjmpfeokgpdegzasxltcrdtb ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759432574.7848089-33459-90227843769571/AnsiballZ_command.py'
Oct 02 19:16:15 compute-0 sudo[245317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:16:15 compute-0 python3[245319]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:16:15 compute-0 sudo[245317]: pam_unix(sudo:session): session closed for user root
Oct 02 19:16:16 compute-0 unix_chkpwd[245359]: password check failed for user (root)
Oct 02 19:16:16 compute-0 sshd-session[245357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:16:17 compute-0 podman[245361]: 2025-10-02 19:16:17.709589493 +0000 UTC m=+0.080098889 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:16:17 compute-0 podman[245360]: 2025-10-02 19:16:17.721491009 +0000 UTC m=+0.094599206 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:16:18 compute-0 sshd-session[245357]: Failed password for root from 91.224.92.108 port 18366 ssh2
Oct 02 19:16:20 compute-0 unix_chkpwd[245399]: password check failed for user (root)
Oct 02 19:16:20 compute-0 podman[245400]: 2025-10-02 19:16:20.73890097 +0000 UTC m=+0.108750472 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:16:22 compute-0 sshd-session[245357]: Failed password for root from 91.224.92.108 port 18366 ssh2
Oct 02 19:16:22 compute-0 podman[245421]: 2025-10-02 19:16:22.744752964 +0000 UTC m=+0.107658253 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:16:22 compute-0 podman[245420]: 2025-10-02 19:16:22.749807432 +0000 UTC m=+0.128225625 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Oct 02 19:16:23 compute-0 unix_chkpwd[245460]: password check failed for user (root)
Oct 02 19:16:24 compute-0 sshd-session[245357]: Failed password for root from 91.224.92.108 port 18366 ssh2
Oct 02 19:16:26 compute-0 sshd-session[245357]: Received disconnect from 91.224.92.108 port 18366:11:  [preauth]
Oct 02 19:16:26 compute-0 sshd-session[245357]: Disconnected from authenticating user root 91.224.92.108 port 18366 [preauth]
Oct 02 19:16:26 compute-0 sshd-session[245357]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:16:27 compute-0 unix_chkpwd[245463]: password check failed for user (root)
Oct 02 19:16:27 compute-0 sshd-session[245461]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:16:27 compute-0 podman[245464]: 2025-10-02 19:16:27.746838179 +0000 UTC m=+0.115012163 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:16:27 compute-0 podman[245465]: 2025-10-02 19:16:27.750576491 +0000 UTC m=+0.125680975 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:16:29 compute-0 sshd-session[245461]: Failed password for root from 91.224.92.108 port 52238 ssh2
Oct 02 19:16:29 compute-0 podman[209015]: time="2025-10-02T19:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:16:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:16:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4666 "" "Go-http-client/1.1"
Oct 02 19:16:30 compute-0 unix_chkpwd[245508]: password check failed for user (root)
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: ERROR   19:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: ERROR   19:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: ERROR   19:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: ERROR   19:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:16:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:16:31 compute-0 podman[245509]: 2025-10-02 19:16:31.72352613 +0000 UTC m=+0.094996317 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 19:16:31 compute-0 podman[245510]: 2025-10-02 19:16:31.844727851 +0000 UTC m=+0.208974290 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:16:33 compute-0 sshd-session[245461]: Failed password for root from 91.224.92.108 port 52238 ssh2
Oct 02 19:16:33 compute-0 unix_chkpwd[245552]: password check failed for user (root)
Oct 02 19:16:35 compute-0 sshd-session[245461]: Failed password for root from 91.224.92.108 port 52238 ssh2
Oct 02 19:16:35 compute-0 sshd-session[245461]: Received disconnect from 91.224.92.108 port 52238:11:  [preauth]
Oct 02 19:16:35 compute-0 sshd-session[245461]: Disconnected from authenticating user root 91.224.92.108 port 52238 [preauth]
Oct 02 19:16:35 compute-0 sshd-session[245461]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:16:36 compute-0 unix_chkpwd[245555]: password check failed for user (root)
Oct 02 19:16:36 compute-0 sshd-session[245553]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:16:38 compute-0 sshd-session[245553]: Failed password for root from 91.224.92.108 port 52240 ssh2
Oct 02 19:16:39 compute-0 unix_chkpwd[245556]: password check failed for user (root)
Oct 02 19:16:40 compute-0 podman[245557]: 2025-10-02 19:16:40.711481158 +0000 UTC m=+0.086111464 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:16:41 compute-0 sshd-session[245553]: Failed password for root from 91.224.92.108 port 52240 ssh2
Oct 02 19:16:42 compute-0 unix_chkpwd[245581]: password check failed for user (root)
Oct 02 19:16:44 compute-0 sshd-session[245553]: Failed password for root from 91.224.92.108 port 52240 ssh2
Oct 02 19:16:45 compute-0 sshd-session[245553]: Received disconnect from 91.224.92.108 port 52240:11:  [preauth]
Oct 02 19:16:45 compute-0 sshd-session[245553]: Disconnected from authenticating user root 91.224.92.108 port 52240 [preauth]
Oct 02 19:16:45 compute-0 sshd-session[245553]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:16:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:16:47.449 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:16:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:16:47.450 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:16:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:16:47.450 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:16:48 compute-0 podman[245582]: 2025-10-02 19:16:48.687906976 +0000 UTC m=+0.062213080 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Oct 02 19:16:48 compute-0 podman[245583]: 2025-10-02 19:16:48.729279037 +0000 UTC m=+0.100584659 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:16:51 compute-0 podman[245623]: 2025-10-02 19:16:51.700397563 +0000 UTC m=+0.074695851 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:16:53 compute-0 podman[245642]: 2025-10-02 19:16:53.730165951 +0000 UTC m=+0.108823864 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Oct 02 19:16:53 compute-0 podman[245643]: 2025-10-02 19:16:53.733261685 +0000 UTC m=+0.098958724 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:16:58 compute-0 podman[245683]: 2025-10-02 19:16:58.728205984 +0000 UTC m=+0.094714428 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:16:58 compute-0 podman[245682]: 2025-10-02 19:16:58.733751105 +0000 UTC m=+0.101167294 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:16:59 compute-0 podman[209015]: time="2025-10-02T19:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:16:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:16:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4675 "" "Go-http-client/1.1"
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: ERROR   19:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: ERROR   19:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: ERROR   19:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: ERROR   19:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: ERROR   19:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:17:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:17:02 compute-0 podman[245723]: 2025-10-02 19:17:02.70510333 +0000 UTC m=+0.077790596 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:17:02 compute-0 podman[245724]: 2025-10-02 19:17:02.757274025 +0000 UTC m=+0.116930566 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller)
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.069 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.353 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.354 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5707MB free_disk=72.56331634521484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.355 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.355 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.432 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.432 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.459 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.476 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.477 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:17:09 compute-0 nova_compute[194781]: 2025-10-02 19:17:09.477 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:17:10 compute-0 nova_compute[194781]: 2025-10-02 19:17:10.476 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:10 compute-0 nova_compute[194781]: 2025-10-02 19:17:10.477 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:17:10 compute-0 nova_compute[194781]: 2025-10-02 19:17:10.477 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:17:10 compute-0 nova_compute[194781]: 2025-10-02 19:17:10.498 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:17:10 compute-0 nova_compute[194781]: 2025-10-02 19:17:10.499 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:10 compute-0 nova_compute[194781]: 2025-10-02 19:17:10.499 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:11 compute-0 nova_compute[194781]: 2025-10-02 19:17:11.052 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:11 compute-0 podman[245767]: 2025-10-02 19:17:11.686384086 +0000 UTC m=+0.065262055 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:17:12 compute-0 nova_compute[194781]: 2025-10-02 19:17:12.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:12 compute-0 nova_compute[194781]: 2025-10-02 19:17:12.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.937 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.938 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.942 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.943 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.943 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': [], 'network.incoming.packets.error': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': [], 'network.incoming.packets.error': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc88260>] with cache [{}], pollster history [{'cpu': [], 'memory.usage': [], 'network.incoming.packets': [], 'network.incoming.bytes': [], 'power.state': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes': [], 'disk.device.capacity': [], 'disk.device.read.latency': [], 'network.outgoing.bytes.rate': [], 'disk.device.read.requests': [], 'network.incoming.packets.error': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.956 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.957 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:17:12.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:17:14 compute-0 nova_compute[194781]: 2025-10-02 19:17:14.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:17:14 compute-0 nova_compute[194781]: 2025-10-02 19:17:14.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:17:15 compute-0 sshd-session[244051]: Received disconnect from 38.102.83.227 port 47642:11: disconnected by user
Oct 02 19:17:15 compute-0 sshd-session[244051]: Disconnected from user zuul 38.102.83.227 port 47642
Oct 02 19:17:15 compute-0 sshd-session[244048]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:17:15 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Oct 02 19:17:15 compute-0 systemd[1]: session-30.scope: Consumed 9.754s CPU time.
Oct 02 19:17:15 compute-0 systemd-logind[798]: Session 30 logged out. Waiting for processes to exit.
Oct 02 19:17:15 compute-0 systemd-logind[798]: Removed session 30.
Oct 02 19:17:19 compute-0 podman[245792]: 2025-10-02 19:17:19.723973368 +0000 UTC m=+0.096009885 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:17:19 compute-0 podman[245793]: 2025-10-02 19:17:19.735011549 +0000 UTC m=+0.092897599 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 02 19:17:22 compute-0 podman[245832]: 2025-10-02 19:17:22.750626574 +0000 UTC m=+0.126927869 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:17:24 compute-0 podman[245852]: 2025-10-02 19:17:24.707945038 +0000 UTC m=+0.080213155 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 19:17:24 compute-0 podman[245851]: 2025-10-02 19:17:24.717868568 +0000 UTC m=+0.086929218 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Oct 02 19:17:29 compute-0 podman[245891]: 2025-10-02 19:17:29.68205776 +0000 UTC m=+0.061907827 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:17:29 compute-0 podman[245892]: 2025-10-02 19:17:29.721710399 +0000 UTC m=+0.098693018 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:17:29 compute-0 podman[209015]: time="2025-10-02T19:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:17:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:17:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4670 "" "Go-http-client/1.1"
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: ERROR   19:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: ERROR   19:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: ERROR   19:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: ERROR   19:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: ERROR   19:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:17:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:17:33 compute-0 podman[245932]: 2025-10-02 19:17:33.705666476 +0000 UTC m=+0.079108775 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:17:33 compute-0 podman[245933]: 2025-10-02 19:17:33.73301113 +0000 UTC m=+0.107136548 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:17:42 compute-0 podman[245972]: 2025-10-02 19:17:42.730365027 +0000 UTC m=+0.105970916 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:17:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:17:47.451 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:17:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:17:47.452 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:17:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:17:47.452 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:17:50 compute-0 podman[245998]: 2025-10-02 19:17:50.706587416 +0000 UTC m=+0.075803055 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:17:50 compute-0 podman[245999]: 2025-10-02 19:17:50.740401787 +0000 UTC m=+0.094115564 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4)
Oct 02 19:17:53 compute-0 podman[246035]: 2025-10-02 19:17:53.738735081 +0000 UTC m=+0.117420088 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, vcs-type=git, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release=1214.1726694543)
Oct 02 19:17:55 compute-0 podman[246055]: 2025-10-02 19:17:55.722811234 +0000 UTC m=+0.092484508 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 19:17:55 compute-0 podman[246054]: 2025-10-02 19:17:55.733760612 +0000 UTC m=+0.101406131 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Oct 02 19:17:59 compute-0 podman[209015]: time="2025-10-02T19:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:17:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:17:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4679 "" "Go-http-client/1.1"
Oct 02 19:18:00 compute-0 podman[246091]: 2025-10-02 19:18:00.751878051 +0000 UTC m=+0.111908737 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:18:00 compute-0 podman[246092]: 2025-10-02 19:18:00.781129718 +0000 UTC m=+0.135434608 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: ERROR   19:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: ERROR   19:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: ERROR   19:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: ERROR   19:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: ERROR   19:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:18:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:18:04 compute-0 podman[246130]: 2025-10-02 19:18:04.709782327 +0000 UTC m=+0.090790813 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:18:04 compute-0 podman[246131]: 2025-10-02 19:18:04.745816558 +0000 UTC m=+0.123147013 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 19:18:10 compute-0 nova_compute[194781]: 2025-10-02 19:18:10.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.078 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.079 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.079 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.079 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.395 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.397 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5708MB free_disk=72.56331634521484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.397 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.398 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.472 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.473 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.505 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.525 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.528 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:18:11 compute-0 nova_compute[194781]: 2025-10-02 19:18:11.529 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:18:12 compute-0 nova_compute[194781]: 2025-10-02 19:18:12.531 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:12 compute-0 nova_compute[194781]: 2025-10-02 19:18:12.532 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:18:12 compute-0 nova_compute[194781]: 2025-10-02 19:18:12.532 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:18:12 compute-0 nova_compute[194781]: 2025-10-02 19:18:12.551 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:18:12 compute-0 nova_compute[194781]: 2025-10-02 19:18:12.551 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:12 compute-0 nova_compute[194781]: 2025-10-02 19:18:12.552 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:13 compute-0 nova_compute[194781]: 2025-10-02 19:18:13.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:13 compute-0 nova_compute[194781]: 2025-10-02 19:18:13.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:13 compute-0 nova_compute[194781]: 2025-10-02 19:18:13.050 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:13 compute-0 podman[246174]: 2025-10-02 19:18:13.71241727 +0000 UTC m=+0.082635481 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:18:14 compute-0 nova_compute[194781]: 2025-10-02 19:18:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:18:14 compute-0 nova_compute[194781]: 2025-10-02 19:18:14.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:18:21 compute-0 podman[246198]: 2025-10-02 19:18:21.758319875 +0000 UTC m=+0.119484614 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:18:21 compute-0 podman[246199]: 2025-10-02 19:18:21.763088015 +0000 UTC m=+0.133174027 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:18:24 compute-0 podman[246238]: 2025-10-02 19:18:24.684029082 +0000 UTC m=+0.062976266 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Oct 02 19:18:26 compute-0 podman[246259]: 2025-10-02 19:18:26.701135144 +0000 UTC m=+0.073548583 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:18:26 compute-0 podman[246258]: 2025-10-02 19:18:26.76859579 +0000 UTC m=+0.128802807 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 02 19:18:29 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:18:29.385 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:18:29 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:18:29.386 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:18:29 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:18:29.387 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:18:29 compute-0 podman[209015]: time="2025-10-02T19:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:18:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:18:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4685 "" "Go-http-client/1.1"
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: ERROR   19:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: ERROR   19:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: ERROR   19:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: ERROR   19:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: ERROR   19:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:18:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:18:31 compute-0 podman[246298]: 2025-10-02 19:18:31.707623178 +0000 UTC m=+0.073891473 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:18:31 compute-0 podman[246297]: 2025-10-02 19:18:31.713713664 +0000 UTC m=+0.088457430 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:18:35 compute-0 podman[246338]: 2025-10-02 19:18:35.762076594 +0000 UTC m=+0.126955137 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 19:18:35 compute-0 podman[246339]: 2025-10-02 19:18:35.819400654 +0000 UTC m=+0.187317710 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:18:44 compute-0 podman[246383]: 2025-10-02 19:18:44.684587973 +0000 UTC m=+0.067048817 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:18:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:18:47.453 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:18:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:18:47.453 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:18:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:18:47.454 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:18:52 compute-0 podman[246407]: 2025-10-02 19:18:52.718712019 +0000 UTC m=+0.093100016 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm)
Oct 02 19:18:52 compute-0 podman[246408]: 2025-10-02 19:18:52.740472201 +0000 UTC m=+0.100095706 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct 02 19:18:55 compute-0 podman[246447]: 2025-10-02 19:18:55.741655693 +0000 UTC m=+0.108330440 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., 
io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9)
Oct 02 19:18:57 compute-0 podman[246464]: 2025-10-02 19:18:57.744299821 +0000 UTC m=+0.115167606 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64)
Oct 02 19:18:57 compute-0 podman[246465]: 2025-10-02 19:18:57.744395374 +0000 UTC m=+0.109319697 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:18:59 compute-0 podman[209015]: time="2025-10-02T19:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:18:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:18:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4682 "" "Go-http-client/1.1"
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: ERROR   19:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: ERROR   19:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: ERROR   19:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: ERROR   19:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: ERROR   19:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:19:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:19:02 compute-0 podman[246505]: 2025-10-02 19:19:02.744487763 +0000 UTC m=+0.122254739 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:19:02 compute-0 podman[246506]: 2025-10-02 19:19:02.787160465 +0000 UTC m=+0.145854492 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 19:19:06 compute-0 podman[246546]: 2025-10-02 19:19:06.776871969 +0000 UTC m=+0.129883937 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:19:06 compute-0 podman[246547]: 2025-10-02 19:19:06.78903192 +0000 UTC m=+0.146952512 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.070 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.072 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.380 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.381 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5684MB free_disk=72.56331634521484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.382 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.382 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.448 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.448 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.470 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.484 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.485 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:19:11 compute-0 nova_compute[194781]: 2025-10-02 19:19:11.485 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:12 compute-0 nova_compute[194781]: 2025-10-02 19:19:12.484 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:12 compute-0 nova_compute[194781]: 2025-10-02 19:19:12.485 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.938 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.938 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.938 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9eb28440>] with cache [{}], pollster history [{'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.943 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.946 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.947 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.948 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.949 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.950 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.951 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.952 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:19:12.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:19:13 compute-0 nova_compute[194781]: 2025-10-02 19:19:13.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:13 compute-0 nova_compute[194781]: 2025-10-02 19:19:13.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.049 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.050 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.051 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:14 compute-0 nova_compute[194781]: 2025-10-02 19:19:14.051 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:19:15 compute-0 nova_compute[194781]: 2025-10-02 19:19:15.045 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:19:15 compute-0 podman[246588]: 2025-10-02 19:19:15.746652414 +0000 UTC m=+0.111097645 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:19:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:19.176 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:19:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:19.178 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:19:23 compute-0 podman[246614]: 2025-10-02 19:19:23.737083451 +0000 UTC m=+0.097073523 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct 02 19:19:23 compute-0 podman[246613]: 2025-10-02 19:19:23.74292705 +0000 UTC m=+0.100508697 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:19:26 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:26.181 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:26 compute-0 podman[246649]: 2025-10-02 19:19:26.721447205 +0000 UTC m=+0.090972168 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 19:19:28 compute-0 podman[246669]: 2025-10-02 19:19:28.70669606 +0000 UTC m=+0.079003162 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:19:28 compute-0 podman[246670]: 2025-10-02 19:19:28.72066308 +0000 UTC m=+0.086772463 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:19:29 compute-0 podman[209015]: time="2025-10-02T19:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:19:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30748 "" "Go-http-client/1.1"
Oct 02 19:19:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4686 "" "Go-http-client/1.1"
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.440 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.441 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.462 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.578 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.579 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.589 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.589 2 INFO nova.compute.claims [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.710 2 DEBUG nova.compute.provider_tree [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.740 2 DEBUG nova.scheduler.client.report [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.769 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.770 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.813 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.813 2 DEBUG nova.network.neutron [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.834 2 INFO nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.873 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.978 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.980 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.981 2 INFO nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Creating image(s)
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.983 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.984 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.985 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.986 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:30 compute-0 nova_compute[194781]: 2025-10-02 19:19:30.988 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:31 compute-0 openstack_network_exporter[211160]: ERROR   19:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:31 compute-0 openstack_network_exporter[211160]: ERROR   19:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:19:31 compute-0 openstack_network_exporter[211160]: ERROR   19:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:19:31 compute-0 openstack_network_exporter[211160]: ERROR   19:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:19:31 compute-0 openstack_network_exporter[211160]: ERROR   19:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.221 2 WARNING oslo_policy.policy [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.221 2 WARNING oslo_policy.policy [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.539 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.633 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.part --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.635 2 DEBUG nova.virt.images [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] 2c6780ee-8ca6-4dab-831c-c89907768547 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.642 2 DEBUG nova.privsep.utils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.643 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.part /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.972 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.part /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.converted" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:32 compute-0 nova_compute[194781]: 2025-10-02 19:19:32.979 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.060 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d.converted --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.062 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.073 2 INFO oslo.privsep.daemon [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp71500wvt/privsep.sock']
Oct 02 19:19:33 compute-0 podman[246725]: 2025-10-02 19:19:33.716598172 +0000 UTC m=+0.087314402 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.758 2 INFO oslo.privsep.daemon [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.614 52 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.621 52 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.625 52 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.625 52 INFO oslo.privsep.daemon [-] privsep daemon running as pid 52
Oct 02 19:19:33 compute-0 podman[246727]: 2025-10-02 19:19:33.765043778 +0000 UTC m=+0.122627201 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.861 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.913 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.915 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.917 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:33 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.942 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:33.998 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.001 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.044 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.046 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.047 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.145 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.147 2 DEBUG nova.virt.disk.api [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Checking if we can resize image /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.148 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.246 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.248 2 DEBUG nova.virt.disk.api [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Cannot resize image /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.249 2 DEBUG nova.objects.instance [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.274 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.275 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.277 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.278 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.280 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.281 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.308 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.310 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.381 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.383 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.409 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.502 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.504 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.505 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.529 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.619 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.621 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.725 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 1073741824" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.727 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.728 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.791 2 DEBUG nova.network.neutron [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Successfully created port: db098052-6623-4e4a-9fb7-65b4006efb6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.797 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.799 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.800 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Ensure instance console log exists: /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.801 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.802 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:34 compute-0 nova_compute[194781]: 2025-10-02 19:19:34.803 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.418 2 DEBUG nova.network.neutron [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Successfully updated port: db098052-6623-4e4a-9fb7-65b4006efb6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.436 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.437 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.437 2 DEBUG nova.network.neutron [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.660 2 DEBUG nova.network.neutron [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.927 2 DEBUG nova.compute.manager [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Received event network-changed-db098052-6623-4e4a-9fb7-65b4006efb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.928 2 DEBUG nova.compute.manager [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Refreshing instance network info cache due to event network-changed-db098052-6623-4e4a-9fb7-65b4006efb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:19:36 compute-0 nova_compute[194781]: 2025-10-02 19:19:36.929 2 DEBUG oslo_concurrency.lockutils [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.731 2 DEBUG nova.network.neutron [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:19:37 compute-0 podman[246799]: 2025-10-02 19:19:37.738603778 +0000 UTC m=+0.111347029 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.764 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.765 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Instance network_info: |[{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.765 2 DEBUG oslo_concurrency.lockutils [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.765 2 DEBUG nova.network.neutron [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Refreshing network info cache for port db098052-6623-4e4a-9fb7-65b4006efb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.769 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Start _get_guest_xml network_info=[{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '2c6780ee-8ca6-4dab-831c-c89907768547'}], 'ephemerals': [{'encrypted': False, 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.777 2 WARNING nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.785 2 DEBUG nova.virt.libvirt.host [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.786 2 DEBUG nova.virt.libvirt.host [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.794 2 DEBUG nova.virt.libvirt.host [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.795 2 DEBUG nova.virt.libvirt.host [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.795 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.796 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:18:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9b897399-e7fe-4a3e-9cc1-c1f819a27557',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.796 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.796 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.797 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.797 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.797 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.798 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.798 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.798 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.798 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.799 2 DEBUG nova.virt.hardware [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.802 2 DEBUG nova.privsep.utils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.803 2 DEBUG nova.virt.libvirt.vif [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-j4qd9h1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:19:30Z,user_data=None,user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=7aab78e5-2ff6-460d-87d6-f4c21f2d4403,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.804 2 DEBUG nova.network.os_vif_util [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:19:37 compute-0 podman[246800]: 2025-10-02 19:19:37.805112497 +0000 UTC m=+0.165486754 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.805 2 DEBUG nova.network.os_vif_util [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:88:9d,bridge_name='br-int',has_traffic_filtering=True,id=db098052-6623-4e4a-9fb7-65b4006efb6f,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb098052-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.806 2 DEBUG nova.objects.instance [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.822 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <uuid>7aab78e5-2ff6-460d-87d6-f4c21f2d4403</uuid>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <name>instance-00000001</name>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <memory>524288</memory>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:name>test_0</nova:name>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:19:37</nova:creationTime>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:flavor name="m1.small">
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:memory>512</nova:memory>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:user uuid="5e0565a40c4e40f9ab77ce190f9527c5">admin</nova:user>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:project uuid="c6bd7784161a4cc3a2e8715feee92228">admin</nova:project>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="2c6780ee-8ca6-4dab-831c-c89907768547"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         <nova:port uuid="db098052-6623-4e4a-9fb7-65b4006efb6f">
Oct 02 19:19:37 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="192.168.0.201" ipVersion="4"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <system>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <entry name="serial">7aab78e5-2ff6-460d-87d6-f4c21f2d4403</entry>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <entry name="uuid">7aab78e5-2ff6-460d-87d6-f4c21f2d4403</entry>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </system>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <os>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </os>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <features>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </features>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.config"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:85:88:9d"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <target dev="tapdb098052-66"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/console.log" append="off"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <video>
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </video>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:19:37 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:19:37 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:19:37 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:19:37 compute-0 nova_compute[194781]: </domain>
Oct 02 19:19:37 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.824 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Preparing to wait for external event network-vif-plugged-db098052-6623-4e4a-9fb7-65b4006efb6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.824 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.824 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.825 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.825 2 DEBUG nova.virt.libvirt.vif [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:19:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-j4qd9h1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:19:30Z,user_data=None,user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=7aab78e5-2ff6-460d-87d6-f4c21f2d4403,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.826 2 DEBUG nova.network.os_vif_util [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.826 2 DEBUG nova.network.os_vif_util [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:88:9d,bridge_name='br-int',has_traffic_filtering=True,id=db098052-6623-4e4a-9fb7-65b4006efb6f,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb098052-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.827 2 DEBUG os_vif [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:88:9d,bridge_name='br-int',has_traffic_filtering=True,id=db098052-6623-4e4a-9fb7-65b4006efb6f,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb098052-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.858 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.859 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.859 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.874 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.874 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:19:37 compute-0 nova_compute[194781]: 2025-10-02 19:19:37.875 2 INFO oslo.privsep.daemon [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpri02ci0p/privsep.sock']
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.577 2 INFO oslo.privsep.daemon [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.431 89 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.439 89 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.443 89 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.443 89 INFO oslo.privsep.daemon [-] privsep daemon running as pid 89
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.874 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb098052-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.875 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb098052-66, col_values=(('external_ids', {'iface-id': 'db098052-6623-4e4a-9fb7-65b4006efb6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:88:9d', 'vm-uuid': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:38 compute-0 NetworkManager[52324]: <info>  [1759432778.8799] manager: (tapdb098052-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.896 2 INFO os_vif [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:88:9d,bridge_name='br-int',has_traffic_filtering=True,id=db098052-6623-4e4a-9fb7-65b4006efb6f,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb098052-66')
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.966 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.967 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.967 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.968 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No VIF found with MAC fa:16:3e:85:88:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:19:38 compute-0 nova_compute[194781]: 2025-10-02 19:19:38.968 2 INFO nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Using config drive
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.506 2 DEBUG nova.network.neutron [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated VIF entry in instance network info cache for port db098052-6623-4e4a-9fb7-65b4006efb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.507 2 DEBUG nova.network.neutron [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.518 2 INFO nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Creating config drive at /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.config
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.528 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphfg6mjdd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.549 2 DEBUG oslo_concurrency.lockutils [req-3480a93d-68b0-4042-8a2c-3630ae8266d3 req-0617f452-2108-4843-9e74-99a6ee6bee96 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.654 2 DEBUG oslo_concurrency.processutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphfg6mjdd" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:19:39 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 02 19:19:39 compute-0 kernel: tapdb098052-66: entered promiscuous mode
Oct 02 19:19:39 compute-0 NetworkManager[52324]: <info>  [1759432779.7880] manager: (tapdb098052-66): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Oct 02 19:19:39 compute-0 ovn_controller[97052]: 2025-10-02T19:19:39Z|00027|binding|INFO|Claiming lport db098052-6623-4e4a-9fb7-65b4006efb6f for this chassis.
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:39 compute-0 ovn_controller[97052]: 2025-10-02T19:19:39Z|00028|binding|INFO|db098052-6623-4e4a-9fb7-65b4006efb6f: Claiming fa:16:3e:85:88:9d 192.168.0.201
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:39 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:39.814 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:88:9d 192.168.0.201'], port_security=['fa:16:3e:85:88:9d 192.168.0.201'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.201/24', 'neutron:device_id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=db098052-6623-4e4a-9fb7-65b4006efb6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:19:39 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:39.817 105943 INFO neutron.agent.ovn.metadata.agent [-] Port db098052-6623-4e4a-9fb7-65b4006efb6f in datapath b5760fda-9195-4e68-8506-4362bf1edf4f bound to our chassis
Oct 02 19:19:39 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:39.821 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:19:39 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:39.822 105943 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpr7y6iizv/privsep.sock']
Oct 02 19:19:39 compute-0 systemd-udevd[246872]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:19:39 compute-0 NetworkManager[52324]: <info>  [1759432779.8570] device (tapdb098052-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:19:39 compute-0 NetworkManager[52324]: <info>  [1759432779.8579] device (tapdb098052-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:39 compute-0 ovn_controller[97052]: 2025-10-02T19:19:39Z|00029|binding|INFO|Setting lport db098052-6623-4e4a-9fb7-65b4006efb6f ovn-installed in OVS
Oct 02 19:19:39 compute-0 ovn_controller[97052]: 2025-10-02T19:19:39Z|00030|binding|INFO|Setting lport db098052-6623-4e4a-9fb7-65b4006efb6f up in Southbound
Oct 02 19:19:39 compute-0 nova_compute[194781]: 2025-10-02 19:19:39.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:39 compute-0 systemd-machined[154795]: New machine qemu-1-instance-00000001.
Oct 02 19:19:39 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 02 19:19:40 compute-0 nova_compute[194781]: 2025-10-02 19:19:40.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:40 compute-0 nova_compute[194781]: 2025-10-02 19:19:40.425 2 DEBUG nova.compute.manager [req-ee133183-bd56-4f2f-b6e0-83588a5ad3f6 req-c752e684-a82a-4eb6-816f-9af0ddaa49af fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Received event network-vif-plugged-db098052-6623-4e4a-9fb7-65b4006efb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:19:40 compute-0 nova_compute[194781]: 2025-10-02 19:19:40.426 2 DEBUG oslo_concurrency.lockutils [req-ee133183-bd56-4f2f-b6e0-83588a5ad3f6 req-c752e684-a82a-4eb6-816f-9af0ddaa49af fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:40 compute-0 nova_compute[194781]: 2025-10-02 19:19:40.427 2 DEBUG oslo_concurrency.lockutils [req-ee133183-bd56-4f2f-b6e0-83588a5ad3f6 req-c752e684-a82a-4eb6-816f-9af0ddaa49af fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:40 compute-0 nova_compute[194781]: 2025-10-02 19:19:40.427 2 DEBUG oslo_concurrency.lockutils [req-ee133183-bd56-4f2f-b6e0-83588a5ad3f6 req-c752e684-a82a-4eb6-816f-9af0ddaa49af fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:40 compute-0 nova_compute[194781]: 2025-10-02 19:19:40.428 2 DEBUG nova.compute.manager [req-ee133183-bd56-4f2f-b6e0-83588a5ad3f6 req-c752e684-a82a-4eb6-816f-9af0ddaa49af fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Processing event network-vif-plugged-db098052-6623-4e4a-9fb7-65b4006efb6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.592 105943 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.592 105943 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpr7y6iizv/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.456 246899 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.471 246899 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.481 246899 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.483 246899 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246899
Oct 02 19:19:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:40.596 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e2938f-beb8-41a8-a3e7-08f6be1a90e7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.106 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759432781.1051137, 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.107 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] VM Started (Lifecycle Event)
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.110 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.118 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.128 246899 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.128 246899 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.129 246899 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.134 2 INFO nova.virt.libvirt.driver [-] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Instance spawned successfully.
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.135 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.195 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.202 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.220 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.220 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.221 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.222 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.222 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.223 2 DEBUG nova.virt.libvirt.driver [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.229 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.229 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759432781.1052547, 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.230 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] VM Paused (Lifecycle Event)
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.295 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.301 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759432781.1149113, 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.302 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] VM Resumed (Lifecycle Event)
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.334 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.340 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.344 2 INFO nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Took 10.37 seconds to spawn the instance on the hypervisor.
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.345 2 DEBUG nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.364 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.422 2 INFO nova.compute.manager [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Took 10.88 seconds to build instance.
Oct 02 19:19:41 compute-0 nova_compute[194781]: 2025-10-02 19:19:41.440 2 DEBUG oslo_concurrency.lockutils [None req-c8113b1d-d171-44ac-958e-86fdd94ec009 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:19:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.686 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[13ce7a39-6678-406d-9ff5-4d1811986e3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.687 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5760fda-91 in ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.689 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5760fda-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.689 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[08960332-7d25-4588-9875-69dce142f745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.692 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[23248fa6-38a7-4a6e-876b-985710947eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.718 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1a661d-80d7-48af-8b29-f9e4bcfcdfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.744 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[84a660af-c6f2-48f6-8a97-20b5cb4c22c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:41.747 105943 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpux4ntyxx/privsep.sock']
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.405 105943 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.407 105943 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpux4ntyxx/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.257 246930 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.262 246930 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.265 246930 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.265 246930 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246930
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.412 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[17b4da7d-0762-49bf-8a2f-a3f9e43b4671]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:42 compute-0 nova_compute[194781]: 2025-10-02 19:19:42.512 2 DEBUG nova.compute.manager [req-e5818f6b-cefa-4ac6-99b9-f82c82fd78ae req-9ec33bac-b89f-4477-836e-4a1844179bd8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Received event network-vif-plugged-db098052-6623-4e4a-9fb7-65b4006efb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:19:42 compute-0 nova_compute[194781]: 2025-10-02 19:19:42.512 2 DEBUG oslo_concurrency.lockutils [req-e5818f6b-cefa-4ac6-99b9-f82c82fd78ae req-9ec33bac-b89f-4477-836e-4a1844179bd8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:42 compute-0 nova_compute[194781]: 2025-10-02 19:19:42.513 2 DEBUG oslo_concurrency.lockutils [req-e5818f6b-cefa-4ac6-99b9-f82c82fd78ae req-9ec33bac-b89f-4477-836e-4a1844179bd8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:42 compute-0 nova_compute[194781]: 2025-10-02 19:19:42.513 2 DEBUG oslo_concurrency.lockutils [req-e5818f6b-cefa-4ac6-99b9-f82c82fd78ae req-9ec33bac-b89f-4477-836e-4a1844179bd8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:42 compute-0 nova_compute[194781]: 2025-10-02 19:19:42.514 2 DEBUG nova.compute.manager [req-e5818f6b-cefa-4ac6-99b9-f82c82fd78ae req-9ec33bac-b89f-4477-836e-4a1844179bd8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] No waiting events found dispatching network-vif-plugged-db098052-6623-4e4a-9fb7-65b4006efb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:19:42 compute-0 nova_compute[194781]: 2025-10-02 19:19:42.514 2 WARNING nova.compute.manager [req-e5818f6b-cefa-4ac6-99b9-f82c82fd78ae req-9ec33bac-b89f-4477-836e-4a1844179bd8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Received unexpected event network-vif-plugged-db098052-6623-4e4a-9fb7-65b4006efb6f for instance with vm_state active and task_state None.
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.924 246930 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.924 246930 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:42 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:42.924 246930 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.502 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[88e08d03-94e9-46db-9786-7aaabc1e64c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.509 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[83a9c1e4-8916-4b3d-ac48-f5bbe3807abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 NetworkManager[52324]: <info>  [1759432783.5109] manager: (tapb5760fda-90): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.539 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[904b914f-a9e2-4a30-9a10-4bc74f8384eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.548 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[b47cfb96-e2c8-44d8-9dc3-8179570d0fee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 systemd-udevd[246940]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:19:43 compute-0 NetworkManager[52324]: <info>  [1759432783.5779] device (tapb5760fda-90): carrier: link connected
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.584 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab64f1a-c097-4a58-a5d1-a89558fbfebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.606 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[06a5203c-4f97-4742-9df2-b4afd905731c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 16487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246959, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.625 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e4be8b82-13f7-4363-b9d0-dd0218eaa4a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:b97'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394420, 'tstamp': 394420}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246960, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.642 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a572fabd-9d62-4cf9-951a-80cd4ddc3653]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 16487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246961, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.675 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfe82f9-e12e-4137-ac7b-0ab49897c3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.733 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2908d8-8167-4424-b7cd-20e558124a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.735 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.736 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.736 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:43 compute-0 nova_compute[194781]: 2025-10-02 19:19:43.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:43 compute-0 kernel: tapb5760fda-90: entered promiscuous mode
Oct 02 19:19:43 compute-0 NetworkManager[52324]: <info>  [1759432783.7401] manager: (tapb5760fda-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Oct 02 19:19:43 compute-0 nova_compute[194781]: 2025-10-02 19:19:43.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.747 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:19:43 compute-0 nova_compute[194781]: 2025-10-02 19:19:43.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:43 compute-0 ovn_controller[97052]: 2025-10-02T19:19:43Z|00031|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:19:43 compute-0 nova_compute[194781]: 2025-10-02 19:19:43.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.754 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5760fda-9195-4e68-8506-4362bf1edf4f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5760fda-9195-4e68-8506-4362bf1edf4f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.756 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[30b68b15-6eac-4a8f-9f8d-2134450f828e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.757 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/b5760fda-9195-4e68-8506-4362bf1edf4f.pid.haproxy
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:19:43 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:43.758 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'env', 'PROCESS_TAG=haproxy-b5760fda-9195-4e68-8506-4362bf1edf4f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5760fda-9195-4e68-8506-4362bf1edf4f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:19:43 compute-0 nova_compute[194781]: 2025-10-02 19:19:43.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:43 compute-0 nova_compute[194781]: 2025-10-02 19:19:43.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:44 compute-0 podman[246994]: 2025-10-02 19:19:44.189832427 +0000 UTC m=+0.075963368 container create d0d7d635cce0eaeb97185ff58f5abf81d178d07dffcd12fe777a48b47de892ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:19:44 compute-0 systemd[1]: Started libpod-conmon-d0d7d635cce0eaeb97185ff58f5abf81d178d07dffcd12fe777a48b47de892ed.scope.
Oct 02 19:19:44 compute-0 podman[246994]: 2025-10-02 19:19:44.148804699 +0000 UTC m=+0.034935660 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:19:44 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:19:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e18a603e30b4ec0417217639c37dabbd8a79acded9b1edab594f28fcb0a6bbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:19:44 compute-0 podman[246994]: 2025-10-02 19:19:44.29725538 +0000 UTC m=+0.183386341 container init d0d7d635cce0eaeb97185ff58f5abf81d178d07dffcd12fe777a48b47de892ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:19:44 compute-0 podman[246994]: 2025-10-02 19:19:44.304681989 +0000 UTC m=+0.190812930 container start d0d7d635cce0eaeb97185ff58f5abf81d178d07dffcd12fe777a48b47de892ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 19:19:44 compute-0 neutron-haproxy-ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f[247009]: [NOTICE]   (247013) : New worker (247015) forked
Oct 02 19:19:44 compute-0 neutron-haproxy-ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f[247009]: [NOTICE]   (247013) : Loading success.
Oct 02 19:19:45 compute-0 nova_compute[194781]: 2025-10-02 19:19:45.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:46 compute-0 podman[247024]: 2025-10-02 19:19:46.721793979 +0000 UTC m=+0.096947705 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:19:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:47.453 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:19:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:47.454 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:19:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:19:47.455 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:19:48 compute-0 nova_compute[194781]: 2025-10-02 19:19:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:50 compute-0 nova_compute[194781]: 2025-10-02 19:19:50.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:53 compute-0 ovn_controller[97052]: 2025-10-02T19:19:53Z|00032|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:19:53 compute-0 nova_compute[194781]: 2025-10-02 19:19:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8498] manager: (patch-br-int-to-provnet-fabcecd9-427f-4c39-a611-a0db39c03200): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8528] device (patch-br-int-to-provnet-fabcecd9-427f-4c39-a611-a0db39c03200)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8576] manager: (patch-provnet-fabcecd9-427f-4c39-a611-a0db39c03200-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8596] device (patch-provnet-fabcecd9-427f-4c39-a611-a0db39c03200-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8641] manager: (patch-br-int-to-provnet-fabcecd9-427f-4c39-a611-a0db39c03200): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8668] manager: (patch-provnet-fabcecd9-427f-4c39-a611-a0db39c03200-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8690] device (patch-br-int-to-provnet-fabcecd9-427f-4c39-a611-a0db39c03200)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 19:19:53 compute-0 NetworkManager[52324]: <info>  [1759432793.8710] device (patch-provnet-fabcecd9-427f-4c39-a611-a0db39c03200-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 19:19:53 compute-0 nova_compute[194781]: 2025-10-02 19:19:53.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:53 compute-0 ovn_controller[97052]: 2025-10-02T19:19:53Z|00033|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:19:53 compute-0 nova_compute[194781]: 2025-10-02 19:19:53.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:53 compute-0 nova_compute[194781]: 2025-10-02 19:19:53.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:54 compute-0 podman[247049]: 2025-10-02 19:19:54.721995447 +0000 UTC m=+0.101641753 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:19:54 compute-0 podman[247050]: 2025-10-02 19:19:54.733609129 +0000 UTC m=+0.090911158 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:19:54 compute-0 nova_compute[194781]: 2025-10-02 19:19:54.745 2 DEBUG nova.compute.manager [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Received event network-changed-db098052-6623-4e4a-9fb7-65b4006efb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:19:54 compute-0 nova_compute[194781]: 2025-10-02 19:19:54.745 2 DEBUG nova.compute.manager [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Refreshing instance network info cache due to event network-changed-db098052-6623-4e4a-9fb7-65b4006efb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:19:54 compute-0 nova_compute[194781]: 2025-10-02 19:19:54.745 2 DEBUG oslo_concurrency.lockutils [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:19:54 compute-0 nova_compute[194781]: 2025-10-02 19:19:54.745 2 DEBUG oslo_concurrency.lockutils [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:19:54 compute-0 nova_compute[194781]: 2025-10-02 19:19:54.746 2 DEBUG nova.network.neutron [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Refreshing network info cache for port db098052-6623-4e4a-9fb7-65b4006efb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:19:55 compute-0 nova_compute[194781]: 2025-10-02 19:19:55.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:56 compute-0 nova_compute[194781]: 2025-10-02 19:19:56.469 2 DEBUG nova.network.neutron [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated VIF entry in instance network info cache for port db098052-6623-4e4a-9fb7-65b4006efb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:19:56 compute-0 nova_compute[194781]: 2025-10-02 19:19:56.470 2 DEBUG nova.network.neutron [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:19:56 compute-0 nova_compute[194781]: 2025-10-02 19:19:56.492 2 DEBUG oslo_concurrency.lockutils [req-6d8ddfea-972b-4a7c-8abb-80d7ce8eda58 req-5396b20c-6473-4d7c-b424-73f148327051 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:19:57 compute-0 podman[247090]: 2025-10-02 19:19:57.709826791 +0000 UTC m=+0.088672041 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0)
Oct 02 19:19:58 compute-0 nova_compute[194781]: 2025-10-02 19:19:58.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:19:59 compute-0 podman[247111]: 2025-10-02 19:19:59.707322996 +0000 UTC m=+0.073725661 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:19:59 compute-0 podman[247110]: 2025-10-02 19:19:59.726885472 +0000 UTC m=+0.096422680 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, 
name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Oct 02 19:19:59 compute-0 podman[209015]: time="2025-10-02T19:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:19:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:19:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5157 "" "Go-http-client/1.1"
Oct 02 19:20:00 compute-0 nova_compute[194781]: 2025-10-02 19:20:00.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: ERROR   19:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: ERROR   19:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: ERROR   19:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: ERROR   19:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: ERROR   19:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:20:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:20:03 compute-0 nova_compute[194781]: 2025-10-02 19:20:03.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:04 compute-0 podman[247150]: 2025-10-02 19:20:04.721489664 +0000 UTC m=+0.089919678 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:20:04 compute-0 podman[247149]: 2025-10-02 19:20:04.761990377 +0000 UTC m=+0.121190999 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:20:05 compute-0 nova_compute[194781]: 2025-10-02 19:20:05.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:08 compute-0 podman[247189]: 2025-10-02 19:20:08.707202029 +0000 UTC m=+0.068052894 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 19:20:08 compute-0 podman[247190]: 2025-10-02 19:20:08.730542907 +0000 UTC m=+0.100825910 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:20:08 compute-0 nova_compute[194781]: 2025-10-02 19:20:08.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:10 compute-0 nova_compute[194781]: 2025-10-02 19:20:10.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:11 compute-0 nova_compute[194781]: 2025-10-02 19:20:11.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.055 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.057 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.058 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.107 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.109 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.110 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.111 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.293 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.368 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.369 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.441 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.442 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.540 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.542 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:12 compute-0 nova_compute[194781]: 2025-10-02 19:20:12.641 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.201 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.203 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5269MB free_disk=72.53202056884766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.203 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.204 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.531 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.531 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.532 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.620 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.713 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.714 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.738 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.770 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.824 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.875 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updated inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.876 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.876 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.913 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.914 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.915 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.915 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:20:13 compute-0 nova_compute[194781]: 2025-10-02 19:20:13.933 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:20:14 compute-0 nova_compute[194781]: 2025-10-02 19:20:14.912 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:14 compute-0 nova_compute[194781]: 2025-10-02 19:20:14.913 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:20:14 compute-0 nova_compute[194781]: 2025-10-02 19:20:14.914 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:20:15 compute-0 ovn_controller[97052]: 2025-10-02T19:20:15Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:88:9d 192.168.0.201
Oct 02 19:20:15 compute-0 ovn_controller[97052]: 2025-10-02T19:20:15Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:88:9d 192.168.0.201
Oct 02 19:20:15 compute-0 nova_compute[194781]: 2025-10-02 19:20:15.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:15 compute-0 nova_compute[194781]: 2025-10-02 19:20:15.219 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:20:15 compute-0 nova_compute[194781]: 2025-10-02 19:20:15.221 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:20:15 compute-0 nova_compute[194781]: 2025-10-02 19:20:15.221 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:20:15 compute-0 nova_compute[194781]: 2025-10-02 19:20:15.221 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.203 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.219 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.221 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.222 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.223 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.225 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.227 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.228 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:20:16 compute-0 nova_compute[194781]: 2025-10-02 19:20:16.346 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:17 compute-0 nova_compute[194781]: 2025-10-02 19:20:17.028 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:17 compute-0 podman[247263]: 2025-10-02 19:20:17.734219783 +0000 UTC m=+0.096447851 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:20:18 compute-0 nova_compute[194781]: 2025-10-02 19:20:18.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:18 compute-0 nova_compute[194781]: 2025-10-02 19:20:18.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:20:18 compute-0 nova_compute[194781]: 2025-10-02 19:20:18.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:20 compute-0 nova_compute[194781]: 2025-10-02 19:20:20.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:23 compute-0 ovn_controller[97052]: 2025-10-02T19:20:23Z|00034|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct 02 19:20:23 compute-0 nova_compute[194781]: 2025-10-02 19:20:23.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:24 compute-0 PackageKit[242823]: daemon quit
Oct 02 19:20:24 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 19:20:25 compute-0 nova_compute[194781]: 2025-10-02 19:20:25.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:25 compute-0 podman[247287]: 2025-10-02 19:20:25.741654245 +0000 UTC m=+0.103264772 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:20:25 compute-0 podman[247288]: 2025-10-02 19:20:25.75642021 +0000 UTC m=+0.108314681 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:20:28 compute-0 podman[247326]: 2025-10-02 19:20:28.746542351 +0000 UTC m=+0.103966202 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct 02 19:20:28 compute-0 nova_compute[194781]: 2025-10-02 19:20:28.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:29 compute-0 podman[209015]: time="2025-10-02T19:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:20:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:20:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5161 "" "Go-http-client/1.1"
Oct 02 19:20:30 compute-0 nova_compute[194781]: 2025-10-02 19:20:30.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:30 compute-0 podman[247344]: 2025-10-02 19:20:30.701993277 +0000 UTC m=+0.079647096 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:20:30 compute-0 podman[247343]: 2025-10-02 19:20:30.722734148 +0000 UTC m=+0.092418702 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: ERROR   19:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: ERROR   19:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: ERROR   19:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: ERROR   19:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: ERROR   19:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:20:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:20:33 compute-0 nova_compute[194781]: 2025-10-02 19:20:33.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:35 compute-0 nova_compute[194781]: 2025-10-02 19:20:35.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:35 compute-0 podman[247381]: 2025-10-02 19:20:35.734948749 +0000 UTC m=+0.098279275 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:20:35 compute-0 podman[247382]: 2025-10-02 19:20:35.76487171 +0000 UTC m=+0.122707784 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:20:37 compute-0 nova_compute[194781]: 2025-10-02 19:20:37.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:37 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:37.044 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:20:37 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:37.045 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:20:38 compute-0 nova_compute[194781]: 2025-10-02 19:20:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:39 compute-0 podman[247423]: 2025-10-02 19:20:39.707882648 +0000 UTC m=+0.088635350 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:20:39 compute-0 podman[247424]: 2025-10-02 19:20:39.723708924 +0000 UTC m=+0.100006785 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:20:40 compute-0 nova_compute[194781]: 2025-10-02 19:20:40.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.774 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.774 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.792 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.859 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.860 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.872 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.872 2 INFO nova.compute.claims [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:20:42 compute-0 nova_compute[194781]: 2025-10-02 19:20:42.993 2 DEBUG nova.compute.provider_tree [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.011 2 DEBUG nova.scheduler.client.report [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.036 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.037 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.107 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.107 2 DEBUG nova.network.neutron [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.126 2 INFO nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.164 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.249 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.251 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.251 2 INFO nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Creating image(s)
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.252 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.253 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.254 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.271 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.327 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.328 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.329 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.340 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.401 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.402 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.451 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.453 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.453 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.513 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.514 2 DEBUG nova.virt.disk.api [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Checking if we can resize image /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.515 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.575 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.576 2 DEBUG nova.virt.disk.api [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Cannot resize image /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.577 2 DEBUG nova.objects.instance [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'migration_context' on Instance uuid bf3e67ac-baba-4747-bf94-df866e53bdf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.594 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.594 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.597 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.613 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.675 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.676 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.677 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.692 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.750 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.751 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.792 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.794 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.794 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.852 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.853 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.854 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Ensure instance console log exists: /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.854 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.855 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.855 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:43 compute-0 nova_compute[194781]: 2025-10-02 19:20:43.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.155 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.192 2 WARNING nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.193 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.193 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid bf3e67ac-baba-4747-bf94-df866e53bdf9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.194 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.195 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.195 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.245 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.976 2 DEBUG nova.network.neutron [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Successfully updated port: dff3ea95-fab2-4bcb-9315-6a89cf30ad89 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.998 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.999 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:20:44 compute-0 nova_compute[194781]: 2025-10-02 19:20:44.999 2 DEBUG nova.network.neutron [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:20:45 compute-0 nova_compute[194781]: 2025-10-02 19:20:45.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:45 compute-0 nova_compute[194781]: 2025-10-02 19:20:45.162 2 DEBUG nova.network.neutron [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:20:45 compute-0 nova_compute[194781]: 2025-10-02 19:20:45.493 2 DEBUG nova.compute.manager [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-changed-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:20:45 compute-0 nova_compute[194781]: 2025-10-02 19:20:45.493 2 DEBUG nova.compute.manager [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Refreshing instance network info cache due to event network-changed-dff3ea95-fab2-4bcb-9315-6a89cf30ad89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:20:45 compute-0 nova_compute[194781]: 2025-10-02 19:20:45.494 2 DEBUG oslo_concurrency.lockutils [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.594 2 DEBUG nova.network.neutron [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.616 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.617 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Instance network_info: |[{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.618 2 DEBUG oslo_concurrency.lockutils [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.619 2 DEBUG nova.network.neutron [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Refreshing network info cache for port dff3ea95-fab2-4bcb-9315-6a89cf30ad89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.625 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Start _get_guest_xml network_info=[{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '2c6780ee-8ca6-4dab-831c-c89907768547'}], 'ephemerals': [{'encrypted': False, 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.635 2 WARNING nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.645 2 DEBUG nova.virt.libvirt.host [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.646 2 DEBUG nova.virt.libvirt.host [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.654 2 DEBUG nova.virt.libvirt.host [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.655 2 DEBUG nova.virt.libvirt.host [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.655 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.656 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:18:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9b897399-e7fe-4a3e-9cc1-c1f819a27557',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.656 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.657 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.657 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.657 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.658 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.658 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.659 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.659 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.659 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.660 2 DEBUG nova.virt.hardware [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.664 2 DEBUG nova.virt.libvirt.vif [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:20:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6',id=2,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-gmmcx4ea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256=''
,network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:20:43Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjczNzY1ODM3MzMyODUzNzU2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdH
RhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2
Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZW
N0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3
Oct 02 19:20:46 compute-0 nova_compute[194781]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjczNzY1ODM3MzMyODUzNzU2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2
Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=bf3e67ac-baba-4747-bf94-df866e53bdf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.664 2 DEBUG nova.network.os_vif_util [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.665 2 DEBUG nova.network.os_vif_util [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.666 2 DEBUG nova.objects.instance [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf3e67ac-baba-4747-bf94-df866e53bdf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.684 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <uuid>bf3e67ac-baba-4747-bf94-df866e53bdf9</uuid>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <name>instance-00000002</name>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <memory>524288</memory>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:name>vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6</nova:name>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:20:46</nova:creationTime>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:flavor name="m1.small">
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:memory>512</nova:memory>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:user uuid="5e0565a40c4e40f9ab77ce190f9527c5">admin</nova:user>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:project uuid="c6bd7784161a4cc3a2e8715feee92228">admin</nova:project>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="2c6780ee-8ca6-4dab-831c-c89907768547"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         <nova:port uuid="dff3ea95-fab2-4bcb-9315-6a89cf30ad89">
Oct 02 19:20:46 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="192.168.0.239" ipVersion="4"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <system>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <entry name="serial">bf3e67ac-baba-4747-bf94-df866e53bdf9</entry>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <entry name="uuid">bf3e67ac-baba-4747-bf94-df866e53bdf9</entry>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </system>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <os>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </os>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <features>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </features>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.config"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:28:95:b6"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <target dev="tapdff3ea95-fa"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/console.log" append="off"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <video>
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </video>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:20:46 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:20:46 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:20:46 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:20:46 compute-0 nova_compute[194781]: </domain>
Oct 02 19:20:46 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.685 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Preparing to wait for external event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.686 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.686 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.687 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.688 2 DEBUG nova.virt.libvirt.vif [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:20:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6',id=2,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-gmmcx4ea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack
.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:20:43Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjczNzY1ODM3MzMyODUzNzU2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0
aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92
YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJl
YW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4o
Oct 02 19:20:46 compute-0 nova_compute[194781]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjczNzY1ODM3MzMyODUzNzU2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVu
dC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=bf3e67ac-baba-4747-bf94-df866e53bdf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.688 2 DEBUG nova.network.os_vif_util [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.689 2 DEBUG nova.network.os_vif_util [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.690 2 DEBUG os_vif [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdff3ea95-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdff3ea95-fa, col_values=(('external_ids', {'iface-id': 'dff3ea95-fab2-4bcb-9315-6a89cf30ad89', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:95:b6', 'vm-uuid': 'bf3e67ac-baba-4747-bf94-df866e53bdf9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:46 compute-0 NetworkManager[52324]: <info>  [1759432846.7008] manager: (tapdff3ea95-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.712 2 INFO os_vif [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa')
Oct 02 19:20:46 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:20:46.664 2 DEBUG nova.virt.libvirt.vif [None req-3caa97b8-becd-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.781 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.782 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.783 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.783 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No VIF found with MAC fa:16:3e:28:95:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:20:46 compute-0 nova_compute[194781]: 2025-10-02 19:20:46.785 2 INFO nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Using config drive
Oct 02 19:20:46 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:20:46.688 2 DEBUG nova.virt.libvirt.vif [None req-3caa97b8-becd-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.048 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.410 2 INFO nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Creating config drive at /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.config
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.419 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ufsyrg6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.454 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.455 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.455 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.548 2 DEBUG oslo_concurrency.processutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ufsyrg6" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:20:47 compute-0 kernel: tapdff3ea95-fa: entered promiscuous mode
Oct 02 19:20:47 compute-0 NetworkManager[52324]: <info>  [1759432847.6197] manager: (tapdff3ea95-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:47 compute-0 ovn_controller[97052]: 2025-10-02T19:20:47Z|00035|binding|INFO|Claiming lport dff3ea95-fab2-4bcb-9315-6a89cf30ad89 for this chassis.
Oct 02 19:20:47 compute-0 ovn_controller[97052]: 2025-10-02T19:20:47Z|00036|binding|INFO|dff3ea95-fab2-4bcb-9315-6a89cf30ad89: Claiming fa:16:3e:28:95:b6 192.168.0.239
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.629 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:95:b6 192.168.0.239'], port_security=['fa:16:3e:28:95:b6 192.168.0.239'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-kewzjvdnt5lz-ntmph4mpmrke-cvcdtekesgtz-port-adgemvcynrcg', 'neutron:cidrs': '192.168.0.239/24', 'neutron:device_id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-kewzjvdnt5lz-ntmph4mpmrke-cvcdtekesgtz-port-adgemvcynrcg', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=dff3ea95-fab2-4bcb-9315-6a89cf30ad89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.630 105943 INFO neutron.agent.ovn.metadata.agent [-] Port dff3ea95-fab2-4bcb-9315-6a89cf30ad89 in datapath b5760fda-9195-4e68-8506-4362bf1edf4f bound to our chassis
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.631 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:20:47 compute-0 ovn_controller[97052]: 2025-10-02T19:20:47Z|00037|binding|INFO|Setting lport dff3ea95-fab2-4bcb-9315-6a89cf30ad89 ovn-installed in OVS
Oct 02 19:20:47 compute-0 ovn_controller[97052]: 2025-10-02T19:20:47Z|00038|binding|INFO|Setting lport dff3ea95-fab2-4bcb-9315-6a89cf30ad89 up in Southbound
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.649 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[11001eac-fcbd-4f73-9be8-5fbc2974bf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:20:47 compute-0 systemd-machined[154795]: New machine qemu-2-instance-00000002.
Oct 02 19:20:47 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.680 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecdedc4-59da-44c8-8308-49134b775a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.685 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2b632e-cc7a-447a-9638-2cd19b299813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:20:47 compute-0 systemd-udevd[247516]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.713 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[82373f55-64f6-440d-80cf-d5ff0d82397d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:20:47 compute-0 NetworkManager[52324]: <info>  [1759432847.7154] device (tapdff3ea95-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:20:47 compute-0 NetworkManager[52324]: <info>  [1759432847.7211] device (tapdff3ea95-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.750 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6a5cea-98ca-4055-b466-1f0a1aee0fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 16487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247526, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.785 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ec427c48-c144-4aa5-900e-8a5a98fd4b68]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394432, 'tstamp': 394432}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247528, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394434, 'tstamp': 394434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247528, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.788 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:47 compute-0 nova_compute[194781]: 2025-10-02 19:20:47.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.794 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.795 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.796 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:20:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:20:47.796 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.345 2 DEBUG nova.compute.manager [req-13ef4a96-9ee7-4ee3-92e3-467d9464a9f4 req-469b1100-1209-429c-861d-d50f158ff4ea fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.346 2 DEBUG oslo_concurrency.lockutils [req-13ef4a96-9ee7-4ee3-92e3-467d9464a9f4 req-469b1100-1209-429c-861d-d50f158ff4ea fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.346 2 DEBUG oslo_concurrency.lockutils [req-13ef4a96-9ee7-4ee3-92e3-467d9464a9f4 req-469b1100-1209-429c-861d-d50f158ff4ea fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.347 2 DEBUG oslo_concurrency.lockutils [req-13ef4a96-9ee7-4ee3-92e3-467d9464a9f4 req-469b1100-1209-429c-861d-d50f158ff4ea fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.347 2 DEBUG nova.compute.manager [req-13ef4a96-9ee7-4ee3-92e3-467d9464a9f4 req-469b1100-1209-429c-861d-d50f158ff4ea fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Processing event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:20:48 compute-0 podman[247536]: 2025-10-02 19:20:48.700662593 +0000 UTC m=+0.071896038 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.729 2 DEBUG nova.network.neutron [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updated VIF entry in instance network info cache for port dff3ea95-fab2-4bcb-9315-6a89cf30ad89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.730 2 DEBUG nova.network.neutron [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.733 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.734 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759432848.7334778, bf3e67ac-baba-4747-bf94-df866e53bdf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.734 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] VM Started (Lifecycle Event)
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.737 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.741 2 INFO nova.virt.libvirt.driver [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Instance spawned successfully.
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.742 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.766 2 DEBUG oslo_concurrency.lockutils [req-027e8403-9a75-453a-a7ad-e361e054bcd8 req-433b2533-deea-4dc1-91d8-cb3c66615638 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.770 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.776 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.808 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.809 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759432848.7335982, bf3e67ac-baba-4747-bf94-df866e53bdf9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.809 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] VM Paused (Lifecycle Event)
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.813 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.814 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.814 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.814 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.815 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.815 2 DEBUG nova.virt.libvirt.driver [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.839 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.844 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759432848.7366898, bf3e67ac-baba-4747-bf94-df866e53bdf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.844 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] VM Resumed (Lifecycle Event)
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.866 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.871 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.876 2 INFO nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Took 5.63 seconds to spawn the instance on the hypervisor.
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.876 2 DEBUG nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.902 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.945 2 INFO nova.compute.manager [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Took 6.11 seconds to build instance.
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.963 2 DEBUG oslo_concurrency.lockutils [None req-3caa97b8-becd-4e5a-b2b7-fe131bbcc8d1 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.963 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.964 2 INFO nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:20:48 compute-0 nova_compute[194781]: 2025-10-02 19:20:48.964 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.457 2 DEBUG nova.compute.manager [req-e649191d-0255-4226-9c5f-1c026e3c148b req-84f8045f-f7d2-4dd0-8d81-2ddae5111819 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.458 2 DEBUG oslo_concurrency.lockutils [req-e649191d-0255-4226-9c5f-1c026e3c148b req-84f8045f-f7d2-4dd0-8d81-2ddae5111819 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.458 2 DEBUG oslo_concurrency.lockutils [req-e649191d-0255-4226-9c5f-1c026e3c148b req-84f8045f-f7d2-4dd0-8d81-2ddae5111819 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.458 2 DEBUG oslo_concurrency.lockutils [req-e649191d-0255-4226-9c5f-1c026e3c148b req-84f8045f-f7d2-4dd0-8d81-2ddae5111819 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.459 2 DEBUG nova.compute.manager [req-e649191d-0255-4226-9c5f-1c026e3c148b req-84f8045f-f7d2-4dd0-8d81-2ddae5111819 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] No waiting events found dispatching network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:20:50 compute-0 nova_compute[194781]: 2025-10-02 19:20:50.459 2 WARNING nova.compute.manager [req-e649191d-0255-4226-9c5f-1c026e3c148b req-84f8045f-f7d2-4dd0-8d81-2ddae5111819 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received unexpected event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 for instance with vm_state active and task_state None.
Oct 02 19:20:51 compute-0 nova_compute[194781]: 2025-10-02 19:20:51.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:55 compute-0 nova_compute[194781]: 2025-10-02 19:20:55.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:56 compute-0 nova_compute[194781]: 2025-10-02 19:20:56.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:20:56 compute-0 podman[247563]: 2025-10-02 19:20:56.717546323 +0000 UTC m=+0.083959443 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:20:56 compute-0 podman[247564]: 2025-10-02 19:20:56.76634752 +0000 UTC m=+0.128771683 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:20:59 compute-0 podman[247603]: 2025-10-02 19:20:59.734767302 +0000 UTC m=+0.097459410 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64)
Oct 02 19:20:59 compute-0 podman[209015]: time="2025-10-02T19:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:20:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:20:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5160 "" "Go-http-client/1.1"
Oct 02 19:21:00 compute-0 nova_compute[194781]: 2025-10-02 19:21:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: ERROR   19:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: ERROR   19:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: ERROR   19:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: ERROR   19:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: ERROR   19:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:21:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:21:01 compute-0 nova_compute[194781]: 2025-10-02 19:21:01.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:01 compute-0 podman[247623]: 2025-10-02 19:21:01.714645189 +0000 UTC m=+0.086483548 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 19:21:01 compute-0 podman[247622]: 2025-10-02 19:21:01.733760412 +0000 UTC m=+0.095166764 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:21:05 compute-0 nova_compute[194781]: 2025-10-02 19:21:05.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:06 compute-0 nova_compute[194781]: 2025-10-02 19:21:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:06 compute-0 podman[247660]: 2025-10-02 19:21:06.736490802 +0000 UTC m=+0.096020928 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:21:06 compute-0 podman[247659]: 2025-10-02 19:21:06.739047417 +0000 UTC m=+0.108959429 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:21:10 compute-0 nova_compute[194781]: 2025-10-02 19:21:10.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:10 compute-0 podman[247703]: 2025-10-02 19:21:10.729642077 +0000 UTC m=+0.092737402 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:21:10 compute-0 podman[247704]: 2025-10-02 19:21:10.792639832 +0000 UTC m=+0.155975924 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:21:11 compute-0 nova_compute[194781]: 2025-10-02 19:21:11.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.938 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.939 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.939 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:21:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:12.945 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bf3e67ac-baba-4747-bf94-df866e53bdf9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:21:13 compute-0 nova_compute[194781]: 2025-10-02 19:21:13.074 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:13.400 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bf3e67ac-baba-4747-bf94-df866e53bdf9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.054 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.055 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.055 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.055 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.130 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.188 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.189 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.255 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.257 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.322 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.323 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.380 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:14.388 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Thu, 02 Oct 2025 19:21:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d8540d88-35fb-4b9d-b920-ab5d67cfbae9 x-openstack-request-id: req-d8540d88-35fb-4b9d-b920-ab5d67cfbae9 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:21:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:14.388 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bf3e67ac-baba-4747-bf94-df866e53bdf9", "name": "vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6", "status": "ACTIVE", "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "user_id": "5e0565a40c4e40f9ab77ce190f9527c5", "metadata": {"metering.server_group": "1264e536-3255-4eb3-9284-12888e889ce8"}, "hostId": "536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2", "image": {"id": "2c6780ee-8ca6-4dab-831c-c89907768547", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2c6780ee-8ca6-4dab-831c-c89907768547"}]}, "flavor": {"id": "9b897399-e7fe-4a3e-9cc1-c1f819a27557", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/9b897399-e7fe-4a3e-9cc1-c1f819a27557"}]}, "created": "2025-10-02T19:20:40Z", "updated": "2025-10-02T19:20:48Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.239", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:28:95:b6"}, {"version": 4, "addr": "192.168.122.238", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:28:95:b6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bf3e67ac-baba-4747-bf94-df866e53bdf9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bf3e67ac-baba-4747-bf94-df866e53bdf9"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:20:48.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:21:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:14.388 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bf3e67ac-baba-4747-bf94-df866e53bdf9 used request id req-d8540d88-35fb-4b9d-b920-ab5d67cfbae9 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.388 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:14.389 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'name': 'vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:21:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:14.392 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:21:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:14.392 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7aab78e5-2ff6-460d-87d6-f4c21f2d4403 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.447 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.448 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.512 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.514 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.574 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.575 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.646 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.980 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.982 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=72.51006698608398GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.983 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:21:14 compute-0 nova_compute[194781]: 2025-10-02 19:21:14.983 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.074 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.075 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.075 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.076 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.138 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.153 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.174 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:21:15 compute-0 nova_compute[194781]: 2025-10-02 19:21:15.175 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.230 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Thu, 02 Oct 2025 19:21:14 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-276eca21-9f3b-48c8-9368-f767114e6534 x-openstack-request-id: req-276eca21-9f3b-48c8-9368-f767114e6534 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.230 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7aab78e5-2ff6-460d-87d6-f4c21f2d4403", "name": "test_0", "status": "ACTIVE", "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "user_id": "5e0565a40c4e40f9ab77ce190f9527c5", "metadata": {}, "hostId": "536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2", "image": {"id": "2c6780ee-8ca6-4dab-831c-c89907768547", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2c6780ee-8ca6-4dab-831c-c89907768547"}]}, "flavor": {"id": "9b897399-e7fe-4a3e-9cc1-c1f819a27557", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/9b897399-e7fe-4a3e-9cc1-c1f819a27557"}]}, "created": "2025-10-02T19:19:28Z", "updated": "2025-10-02T19:19:41Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.201", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:85:88:9d"}, {"version": 4, "addr": "192.168.122.208", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:85:88:9d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7aab78e5-2ff6-460d-87d6-f4c21f2d4403"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7aab78e5-2ff6-460d-87d6-f4c21f2d4403"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:19:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response 
/usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.231 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7aab78e5-2ff6-460d-87d6-f4c21f2d4403 used request id req-276eca21-9f3b-48c8-9368-f767114e6534 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.232 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.232 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.233 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:21:15.233244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.259 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/cpu volume: 26120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.287 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 34280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.289 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.289 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.290 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance bf3e67ac-baba-4747-bf94-df866e53bdf9: ceilometer.compute.pollsters.NoVolumeException
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.290 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:21:15.289478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.291 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:21:15.292169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.296 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bf3e67ac-baba-4747-bf94-df866e53bdf9 / tapdff3ea95-fa inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.296 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.299 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 / tapdb098052-66 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.300 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.301 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.301 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2268 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.302 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:21:15.301149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:21:15.302878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.303 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.303 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.304 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:21:15.304289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.304 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.305 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:21:15.306042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.306 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.306 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:21:15.307528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.307 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6>, <NovaLikeServer: test_0>]
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.309 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.309 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:21:15.309444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.355 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.356 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.356 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.404 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.405 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.405 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.406 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:21:15.406519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.407 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.408 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:21:15.408415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:21:15.410318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.432 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.432 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.433 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.464 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.465 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.465 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 492615023 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.466 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.467 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 3313878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.467 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.467 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.467 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:21:15.466549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:21:15.468615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.468 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6>, <NovaLikeServer: test_0>]
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.470 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.470 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.470 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.470 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.470 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.471 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.471 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.471 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.472 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.472 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:21:15.469969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:21:15.472072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.473 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.474 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:21:15.473541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.474 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.474 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:21:15.475778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.476 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.476 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.476 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.476 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.476 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.477 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:21:15.477890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.478 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.478 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.478 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.479 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.479 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.479 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.479 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.480 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:21:15.480207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.480 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.480 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.481 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.481 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.481 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.481 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:21:15.482495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.482 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.483 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.483 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.483 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.483 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.483 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:21:15.484504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.484 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.486 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:21:15.485773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.487 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.487 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:21:15.487107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.487 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:21:15.488143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.489 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.490 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.490 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.490 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.491 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:21:15.489536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:21:15.492 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.175 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.176 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.176 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.177 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.593 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.594 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.595 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.595 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:21:16 compute-0 nova_compute[194781]: 2025-10-02 19:21:16.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:17 compute-0 ovn_controller[97052]: 2025-10-02T19:21:17Z|00039|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.420 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.463 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.463 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.464 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.464 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.465 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:21:18 compute-0 nova_compute[194781]: 2025-10-02 19:21:18.465 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:21:19 compute-0 podman[247772]: 2025-10-02 19:21:19.688252103 +0000 UTC m=+0.067210729 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:21:20 compute-0 nova_compute[194781]: 2025-10-02 19:21:20.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:21 compute-0 nova_compute[194781]: 2025-10-02 19:21:21.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:22 compute-0 ovn_controller[97052]: 2025-10-02T19:21:22Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:95:b6 192.168.0.239
Oct 02 19:21:22 compute-0 ovn_controller[97052]: 2025-10-02T19:21:22Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:95:b6 192.168.0.239
Oct 02 19:21:25 compute-0 nova_compute[194781]: 2025-10-02 19:21:25.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:26 compute-0 nova_compute[194781]: 2025-10-02 19:21:26.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:27 compute-0 podman[247808]: 2025-10-02 19:21:27.746282649 +0000 UTC m=+0.122453993 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:21:27 compute-0 podman[247809]: 2025-10-02 19:21:27.746912856 +0000 UTC m=+0.116095501 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:21:29 compute-0 podman[209015]: time="2025-10-02T19:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:21:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:21:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5160 "" "Go-http-client/1.1"
Oct 02 19:21:30 compute-0 nova_compute[194781]: 2025-10-02 19:21:30.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:30 compute-0 podman[247844]: 2025-10-02 19:21:30.695168712 +0000 UTC m=+0.071144335 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: ERROR   19:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: ERROR   19:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: ERROR   19:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: ERROR   19:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:21:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:21:31 compute-0 nova_compute[194781]: 2025-10-02 19:21:31.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:32 compute-0 podman[247865]: 2025-10-02 19:21:32.720879747 +0000 UTC m=+0.080655993 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:21:32 compute-0 podman[247864]: 2025-10-02 19:21:32.749299675 +0000 UTC m=+0.113443119 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7)
Oct 02 19:21:35 compute-0 nova_compute[194781]: 2025-10-02 19:21:35.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:36 compute-0 nova_compute[194781]: 2025-10-02 19:21:36.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:37 compute-0 podman[247902]: 2025-10-02 19:21:37.697799018 +0000 UTC m=+0.076534781 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:21:37 compute-0 podman[247903]: 2025-10-02 19:21:37.712911956 +0000 UTC m=+0.088125914 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:21:40 compute-0 nova_compute[194781]: 2025-10-02 19:21:40.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:41 compute-0 nova_compute[194781]: 2025-10-02 19:21:41.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:41 compute-0 podman[247945]: 2025-10-02 19:21:41.735765228 +0000 UTC m=+0.109143427 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:21:41 compute-0 podman[247946]: 2025-10-02 19:21:41.774274044 +0000 UTC m=+0.143883700 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:21:45 compute-0 nova_compute[194781]: 2025-10-02 19:21:45.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:46 compute-0 nova_compute[194781]: 2025-10-02 19:21:46.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:21:47.455 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:21:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:21:47.456 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:21:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:21:47.456 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:21:50 compute-0 nova_compute[194781]: 2025-10-02 19:21:50.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:50 compute-0 podman[247989]: 2025-10-02 19:21:50.740830808 +0000 UTC m=+0.094176280 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:21:51 compute-0 nova_compute[194781]: 2025-10-02 19:21:51.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:55 compute-0 nova_compute[194781]: 2025-10-02 19:21:55.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:56 compute-0 nova_compute[194781]: 2025-10-02 19:21:56.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:21:58 compute-0 podman[248013]: 2025-10-02 19:21:58.715687507 +0000 UTC m=+0.090237563 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:21:58 compute-0 podman[248014]: 2025-10-02 19:21:58.747251255 +0000 UTC m=+0.116659381 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct 02 19:21:59 compute-0 podman[209015]: time="2025-10-02T19:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:21:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:21:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5171 "" "Go-http-client/1.1"
Oct 02 19:22:00 compute-0 nova_compute[194781]: 2025-10-02 19:22:00.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: ERROR   19:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: ERROR   19:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: ERROR   19:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: ERROR   19:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:22:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:22:01 compute-0 nova_compute[194781]: 2025-10-02 19:22:01.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:01 compute-0 podman[248051]: 2025-10-02 19:22:01.755030391 +0000 UTC m=+0.124586756 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, architecture=x86_64)
Oct 02 19:22:03 compute-0 podman[248073]: 2025-10-02 19:22:03.737685962 +0000 UTC m=+0.102544977 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:22:03 compute-0 podman[248072]: 2025-10-02 19:22:03.75268929 +0000 UTC m=+0.116078345 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:22:05 compute-0 nova_compute[194781]: 2025-10-02 19:22:05.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:06 compute-0 nova_compute[194781]: 2025-10-02 19:22:06.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:08 compute-0 podman[248112]: 2025-10-02 19:22:08.72256896 +0000 UTC m=+0.088158236 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:22:08 compute-0 podman[248113]: 2025-10-02 19:22:08.751547878 +0000 UTC m=+0.109405324 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true)
Oct 02 19:22:10 compute-0 nova_compute[194781]: 2025-10-02 19:22:10.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:11 compute-0 nova_compute[194781]: 2025-10-02 19:22:11.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:12 compute-0 podman[248151]: 2025-10-02 19:22:12.738209492 +0000 UTC m=+0.105371504 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:22:12 compute-0 podman[248152]: 2025-10-02 19:22:12.788765236 +0000 UTC m=+0.151766375 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.065 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.147 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.205 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.207 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.296 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.297 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.362 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.363 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.424 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.432 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.515 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.516 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.579 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.580 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.663 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.667 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:22:14 compute-0 nova_compute[194781]: 2025-10-02 19:22:14.766 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.092 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.093 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5047MB free_disk=72.48859786987305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.093 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.094 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.178 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.179 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.179 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.180 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.274 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.296 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.299 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:22:15 compute-0 nova_compute[194781]: 2025-10-02 19:22:15.300 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:22:16 compute-0 nova_compute[194781]: 2025-10-02 19:22:16.301 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:16 compute-0 nova_compute[194781]: 2025-10-02 19:22:16.302 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:22:16 compute-0 nova_compute[194781]: 2025-10-02 19:22:16.545 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:22:16 compute-0 nova_compute[194781]: 2025-10-02 19:22:16.545 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:22:16 compute-0 nova_compute[194781]: 2025-10-02 19:22:16.546 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:22:16 compute-0 nova_compute[194781]: 2025-10-02 19:22:16.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.659 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.698 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.698 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.699 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.699 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.699 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.700 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:18 compute-0 nova_compute[194781]: 2025-10-02 19:22:18.700 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:22:20 compute-0 nova_compute[194781]: 2025-10-02 19:22:20.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:20 compute-0 nova_compute[194781]: 2025-10-02 19:22:20.428 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:20 compute-0 nova_compute[194781]: 2025-10-02 19:22:20.428 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:22:21 compute-0 podman[248217]: 2025-10-02 19:22:21.743399668 +0000 UTC m=+0.118171142 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:22:21 compute-0 nova_compute[194781]: 2025-10-02 19:22:21.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:25 compute-0 nova_compute[194781]: 2025-10-02 19:22:25.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:26 compute-0 nova_compute[194781]: 2025-10-02 19:22:26.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:29 compute-0 podman[248241]: 2025-10-02 19:22:29.714487326 +0000 UTC m=+0.090981093 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:22:29 compute-0 podman[248242]: 2025-10-02 19:22:29.737936963 +0000 UTC m=+0.110890924 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:22:29 compute-0 podman[209015]: time="2025-10-02T19:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:22:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:22:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5159 "" "Go-http-client/1.1"
Oct 02 19:22:30 compute-0 nova_compute[194781]: 2025-10-02 19:22:30.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: ERROR   19:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: ERROR   19:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: ERROR   19:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: ERROR   19:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:22:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:22:31 compute-0 nova_compute[194781]: 2025-10-02 19:22:31.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:32 compute-0 podman[248280]: 2025-10-02 19:22:32.705473836 +0000 UTC m=+0.080204700 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, io.openshift.expose-services=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64)
Oct 02 19:22:34 compute-0 podman[248300]: 2025-10-02 19:22:34.73640787 +0000 UTC m=+0.105716423 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:22:34 compute-0 podman[248301]: 2025-10-02 19:22:34.760134605 +0000 UTC m=+0.112034205 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 19:22:35 compute-0 nova_compute[194781]: 2025-10-02 19:22:35.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:36 compute-0 nova_compute[194781]: 2025-10-02 19:22:36.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:39 compute-0 podman[248339]: 2025-10-02 19:22:39.714888574 +0000 UTC m=+0.078336430 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid)
Oct 02 19:22:39 compute-0 podman[248338]: 2025-10-02 19:22:39.733753236 +0000 UTC m=+0.087108048 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:22:40 compute-0 nova_compute[194781]: 2025-10-02 19:22:40.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:41 compute-0 nova_compute[194781]: 2025-10-02 19:22:41.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:43 compute-0 podman[248378]: 2025-10-02 19:22:43.719724252 +0000 UTC m=+0.095997860 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:22:43 compute-0 podman[248379]: 2025-10-02 19:22:43.756897972 +0000 UTC m=+0.115249703 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:22:45 compute-0 nova_compute[194781]: 2025-10-02 19:22:45.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:46 compute-0 nova_compute[194781]: 2025-10-02 19:22:46.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:22:47.457 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:22:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:22:47.458 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:22:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:22:47.459 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:22:48 compute-0 unix_chkpwd[248430]: password check failed for user (root)
Oct 02 19:22:48 compute-0 sshd-session[248428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 02 19:22:50 compute-0 nova_compute[194781]: 2025-10-02 19:22:50.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:50 compute-0 sshd-session[248428]: Failed password for root from 80.94.93.176 port 54840 ssh2
Oct 02 19:22:50 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:22:51 compute-0 nova_compute[194781]: 2025-10-02 19:22:51.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:51 compute-0 unix_chkpwd[248433]: password check failed for user (root)
Oct 02 19:22:52 compute-0 podman[248434]: 2025-10-02 19:22:52.775586526 +0000 UTC m=+0.145052843 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:22:53 compute-0 sshd-session[248428]: Failed password for root from 80.94.93.176 port 54840 ssh2
Oct 02 19:22:54 compute-0 unix_chkpwd[248457]: password check failed for user (root)
Oct 02 19:22:55 compute-0 nova_compute[194781]: 2025-10-02 19:22:55.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:56 compute-0 nova_compute[194781]: 2025-10-02 19:22:56.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:22:57 compute-0 sshd-session[248428]: Failed password for root from 80.94.93.176 port 54840 ssh2
Oct 02 19:22:58 compute-0 sshd-session[248428]: Received disconnect from 80.94.93.176 port 54840:11:  [preauth]
Oct 02 19:22:58 compute-0 sshd-session[248428]: Disconnected from authenticating user root 80.94.93.176 port 54840 [preauth]
Oct 02 19:22:58 compute-0 sshd-session[248428]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 02 19:22:58 compute-0 unix_chkpwd[248460]: password check failed for user (root)
Oct 02 19:22:58 compute-0 sshd-session[248458]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 02 19:22:59 compute-0 podman[209015]: time="2025-10-02T19:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:22:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:22:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5169 "" "Go-http-client/1.1"
Oct 02 19:23:00 compute-0 nova_compute[194781]: 2025-10-02 19:23:00.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:00 compute-0 podman[248461]: 2025-10-02 19:23:00.727310866 +0000 UTC m=+0.086888522 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 19:23:00 compute-0 podman[248462]: 2025-10-02 19:23:00.731293974 +0000 UTC m=+0.090862050 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:23:01 compute-0 sshd-session[248458]: Failed password for root from 80.94.93.176 port 55466 ssh2
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: ERROR   19:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: ERROR   19:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: ERROR   19:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: ERROR   19:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:23:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:23:01 compute-0 nova_compute[194781]: 2025-10-02 19:23:01.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:02 compute-0 unix_chkpwd[248498]: password check failed for user (root)
Oct 02 19:23:03 compute-0 podman[248499]: 2025-10-02 19:23:03.689560906 +0000 UTC m=+0.070524947 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30)
Oct 02 19:23:04 compute-0 sshd-session[248458]: Failed password for root from 80.94.93.176 port 55466 ssh2
Oct 02 19:23:05 compute-0 nova_compute[194781]: 2025-10-02 19:23:05.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:05 compute-0 unix_chkpwd[248524]: password check failed for user (root)
Oct 02 19:23:05 compute-0 podman[248519]: 2025-10-02 19:23:05.361532266 +0000 UTC m=+0.123442265 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:23:05 compute-0 podman[248518]: 2025-10-02 19:23:05.395036736 +0000 UTC m=+0.154110038 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Oct 02 19:23:06 compute-0 nova_compute[194781]: 2025-10-02 19:23:06.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:07 compute-0 sshd-session[248458]: Failed password for root from 80.94.93.176 port 55466 ssh2
Oct 02 19:23:08 compute-0 sshd-session[248458]: Received disconnect from 80.94.93.176 port 55466:11:  [preauth]
Oct 02 19:23:08 compute-0 sshd-session[248458]: Disconnected from authenticating user root 80.94.93.176 port 55466 [preauth]
Oct 02 19:23:08 compute-0 sshd-session[248458]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 02 19:23:09 compute-0 unix_chkpwd[248559]: password check failed for user (root)
Oct 02 19:23:09 compute-0 sshd-session[248557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 02 19:23:10 compute-0 nova_compute[194781]: 2025-10-02 19:23:10.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:10 compute-0 podman[248560]: 2025-10-02 19:23:10.736301018 +0000 UTC m=+0.098532608 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:23:10 compute-0 podman[248561]: 2025-10-02 19:23:10.741442148 +0000 UTC m=+0.112336774 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:23:11 compute-0 nova_compute[194781]: 2025-10-02 19:23:11.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:11 compute-0 sshd-session[248557]: Failed password for root from 80.94.93.176 port 62624 ssh2
Oct 02 19:23:12 compute-0 unix_chkpwd[248599]: password check failed for user (root)
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.939 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.940 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.964 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.964 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.961 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'name': 'vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.972 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.972 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.973 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:12.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:23:12.973358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.009 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/cpu volume: 96130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.043 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 35830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.044 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.044 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:23:13.044906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.045 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.046 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.047 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.047 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:23:13.047397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.052 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.056 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.058 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.058 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes volume: 5149 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.058 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2268 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.059 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:23:13.058065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.060 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:23:13.060435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.060 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.061 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.062 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.063 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:23:13.062624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.065 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.065 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:23:13.065076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.067 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:23:13.068167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.134 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.134 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.134 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.199 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.199 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.200 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.201 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.201 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes.delta volume: 5039 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:23:13.201043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.202 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.202 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.203 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:23:13.202744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.204 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:23:13.204390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.266 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.266 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.267 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.297 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.297 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.298 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.299 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 661561745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:23:13.299572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.300 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 116074178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.301 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 93869390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.301 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.301 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.302 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.303 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.303 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.303 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.303 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.304 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.304 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.305 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.305 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:23:13.303497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.307 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:23:13.307728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.307 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.308 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.309 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.310 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.310 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.310 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:23:13.309478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.313 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.313 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.313 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.314 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.314 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.314 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:23:13.312914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.316 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 1355612991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.317 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 10614908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.317 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.317 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.318 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.318 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:23:13.316784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.320 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.320 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.320 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.321 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.321 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.321 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:23:13.320041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.323 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.323 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.323 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.323 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.324 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:23:13.322928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.324 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.325 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.326 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.326 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.326 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.327 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:23:13.326114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:23:13.328207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.330 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:23:13.330391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.332 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.332 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:23:13.331974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.334 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes.delta volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.334 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:23:13.334234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:23:13.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:23:14 compute-0 nova_compute[194781]: 2025-10-02 19:23:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:14 compute-0 nova_compute[194781]: 2025-10-02 19:23:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:14 compute-0 sshd-session[248557]: Failed password for root from 80.94.93.176 port 62624 ssh2
Oct 02 19:23:14 compute-0 podman[248601]: 2025-10-02 19:23:14.702829554 +0000 UTC m=+0.081618968 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:23:14 compute-0 podman[248602]: 2025-10-02 19:23:14.786683403 +0000 UTC m=+0.153665206 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:23:15 compute-0 nova_compute[194781]: 2025-10-02 19:23:15.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:15 compute-0 nova_compute[194781]: 2025-10-02 19:23:15.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:15 compute-0 unix_chkpwd[248642]: password check failed for user (root)
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.076 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.078 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.079 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.080 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.175 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.257 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.260 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.363 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.365 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.443 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.445 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.518 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.524 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.585 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.586 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.677 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.679 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.773 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.775 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:16 compute-0 nova_compute[194781]: 2025-10-02 19:23:16.857 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.209 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.211 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5034MB free_disk=72.48856735229492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.211 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.212 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:17 compute-0 sshd-session[248557]: Failed password for root from 80.94.93.176 port 62624 ssh2
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.307 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.308 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.309 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.309 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.380 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.396 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.399 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:23:17 compute-0 nova_compute[194781]: 2025-10-02 19:23:17.400 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:18 compute-0 nova_compute[194781]: 2025-10-02 19:23:18.401 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:18 compute-0 nova_compute[194781]: 2025-10-02 19:23:18.402 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:23:18 compute-0 nova_compute[194781]: 2025-10-02 19:23:18.403 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:23:18 compute-0 sshd-session[248557]: Received disconnect from 80.94.93.176 port 62624:11:  [preauth]
Oct 02 19:23:18 compute-0 sshd-session[248557]: Disconnected from authenticating user root 80.94.93.176 port 62624 [preauth]
Oct 02 19:23:18 compute-0 sshd-session[248557]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 02 19:23:19 compute-0 nova_compute[194781]: 2025-10-02 19:23:19.265 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:23:19 compute-0 nova_compute[194781]: 2025-10-02 19:23:19.266 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:23:19 compute-0 nova_compute[194781]: 2025-10-02 19:23:19.267 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:23:19 compute-0 nova_compute[194781]: 2025-10-02 19:23:19.269 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:23:20 compute-0 nova_compute[194781]: 2025-10-02 19:23:20.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.299 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.320 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.321 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.322 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.322 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.323 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.323 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:21 compute-0 nova_compute[194781]: 2025-10-02 19:23:21.951 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:23:23 compute-0 podman[248668]: 2025-10-02 19:23:23.70026045 +0000 UTC m=+0.071612086 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:23:25 compute-0 nova_compute[194781]: 2025-10-02 19:23:25.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:26 compute-0 nova_compute[194781]: 2025-10-02 19:23:26.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:29 compute-0 podman[209015]: time="2025-10-02T19:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:23:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:23:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5169 "" "Go-http-client/1.1"
Oct 02 19:23:30 compute-0 nova_compute[194781]: 2025-10-02 19:23:30.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: ERROR   19:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:23:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:23:31 compute-0 podman[248693]: 2025-10-02 19:23:31.685438811 +0000 UTC m=+0.059692033 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Oct 02 19:23:31 compute-0 podman[248692]: 2025-10-02 19:23:31.7192452 +0000 UTC m=+0.095211458 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:23:31 compute-0 nova_compute[194781]: 2025-10-02 19:23:31.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:34 compute-0 podman[248727]: 2025-10-02 19:23:34.710830046 +0000 UTC m=+0.084431695 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, io.buildah.version=1.29.0)
Oct 02 19:23:35 compute-0 nova_compute[194781]: 2025-10-02 19:23:35.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:35 compute-0 podman[248746]: 2025-10-02 19:23:35.697575378 +0000 UTC m=+0.074006852 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7)
Oct 02 19:23:35 compute-0 podman[248747]: 2025-10-02 19:23:35.734879321 +0000 UTC m=+0.103396460 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 19:23:36 compute-0 nova_compute[194781]: 2025-10-02 19:23:36.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:40 compute-0 nova_compute[194781]: 2025-10-02 19:23:40.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:41.354 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:23:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:41.355 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:23:41 compute-0 nova_compute[194781]: 2025-10-02 19:23:41.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:41 compute-0 podman[248783]: 2025-10-02 19:23:41.691082191 +0000 UTC m=+0.062334895 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:23:41 compute-0 podman[248782]: 2025-10-02 19:23:41.698362309 +0000 UTC m=+0.073220291 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:23:41 compute-0 nova_compute[194781]: 2025-10-02 19:23:41.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:45 compute-0 nova_compute[194781]: 2025-10-02 19:23:45.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:45 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:45.359 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:45 compute-0 podman[248825]: 2025-10-02 19:23:45.689356101 +0000 UTC m=+0.067015172 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 19:23:45 compute-0 podman[248826]: 2025-10-02 19:23:45.753257698 +0000 UTC m=+0.125997635 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:23:46 compute-0 nova_compute[194781]: 2025-10-02 19:23:46.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:47.457 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:47.458 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:47.459 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.078 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.078 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.101 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.179 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.179 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.189 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.190 2 INFO nova.compute.claims [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.413 2 DEBUG nova.compute.provider_tree [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.436 2 DEBUG nova.scheduler.client.report [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.470 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.471 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.552 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.552 2 DEBUG nova.network.neutron [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.581 2 INFO nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.639 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.741 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.747 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.747 2 INFO nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Creating image(s)
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.748 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.749 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.749 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.762 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.859 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.862 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.863 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.896 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.989 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:48 compute-0 nova_compute[194781]: 2025-10-02 19:23:48.991 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.130 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk 1073741824" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.132 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.133 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.229 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.230 2 DEBUG nova.virt.disk.api [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Checking if we can resize image /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.231 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.292 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.304 2 DEBUG nova.virt.disk.api [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Cannot resize image /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.306 2 DEBUG nova.objects.instance [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'migration_context' on Instance uuid defe27ca-18ff-45c1-a96c-13a1d0d76474 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.322 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.323 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.324 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.349 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.405 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.406 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.407 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.418 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.475 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.476 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.569 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 1073741824" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.578 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.579 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.644 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.645 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.646 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Ensure instance console log exists: /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.646 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.647 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:49 compute-0 nova_compute[194781]: 2025-10-02 19:23:49.647 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:50 compute-0 nova_compute[194781]: 2025-10-02 19:23:50.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:51 compute-0 nova_compute[194781]: 2025-10-02 19:23:51.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.567 2 DEBUG nova.network.neutron [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Successfully updated port: 47329f1e-0ecb-476e-841d-aff3f14a7fcc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.583 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.584 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquired lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.584 2 DEBUG nova.network.neutron [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.659 2 DEBUG nova.compute.manager [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-changed-47329f1e-0ecb-476e-841d-aff3f14a7fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.659 2 DEBUG nova.compute.manager [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Refreshing instance network info cache due to event network-changed-47329f1e-0ecb-476e-841d-aff3f14a7fcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:23:52 compute-0 nova_compute[194781]: 2025-10-02 19:23:52.660 2 DEBUG oslo_concurrency.lockutils [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:23:53 compute-0 nova_compute[194781]: 2025-10-02 19:23:53.370 2 DEBUG nova.network.neutron [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:23:54 compute-0 podman[248897]: 2025-10-02 19:23:54.673708355 +0000 UTC m=+0.056077023 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.363 2 DEBUG nova.network.neutron [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updating instance_info_cache with network_info: [{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.380 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Releasing lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.380 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Instance network_info: |[{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.381 2 DEBUG oslo_concurrency.lockutils [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.381 2 DEBUG nova.network.neutron [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Refreshing network info cache for port 47329f1e-0ecb-476e-841d-aff3f14a7fcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.385 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Start _get_guest_xml network_info=[{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '2c6780ee-8ca6-4dab-831c-c89907768547'}], 'ephemerals': [{'encrypted': False, 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.393 2 WARNING nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.402 2 DEBUG nova.virt.libvirt.host [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.404 2 DEBUG nova.virt.libvirt.host [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.409 2 DEBUG nova.virt.libvirt.host [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.410 2 DEBUG nova.virt.libvirt.host [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.411 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.412 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:18:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9b897399-e7fe-4a3e-9cc1-c1f819a27557',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.413 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.413 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.414 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.414 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.414 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.415 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.415 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.416 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.416 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.417 2 DEBUG nova.virt.hardware [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.420 2 DEBUG nova.virt.libvirt.vif [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:23:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu',id=3,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-k0vw4q79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256=''
,network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:23:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDcxNTAyNDM0ODI0MjgzODkyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdH
RhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2
Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZW
N0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3
Oct 02 19:23:55 compute-0 nova_compute[194781]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDcxNTAyNDM0ODI0MjgzODkyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2
Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=defe27ca-18ff-45c1-a96c-13a1d0d76474,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.421 2 DEBUG nova.network.os_vif_util [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.422 2 DEBUG nova.network.os_vif_util [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.423 2 DEBUG nova.objects.instance [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'pci_devices' on Instance uuid defe27ca-18ff-45c1-a96c-13a1d0d76474 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.444 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <uuid>defe27ca-18ff-45c1-a96c-13a1d0d76474</uuid>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <name>instance-00000003</name>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <memory>524288</memory>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:name>vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu</nova:name>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:23:55</nova:creationTime>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:flavor name="m1.small">
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:memory>512</nova:memory>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:user uuid="5e0565a40c4e40f9ab77ce190f9527c5">admin</nova:user>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:project uuid="c6bd7784161a4cc3a2e8715feee92228">admin</nova:project>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="2c6780ee-8ca6-4dab-831c-c89907768547"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         <nova:port uuid="47329f1e-0ecb-476e-841d-aff3f14a7fcc">
Oct 02 19:23:55 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="192.168.0.44" ipVersion="4"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <system>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <entry name="serial">defe27ca-18ff-45c1-a96c-13a1d0d76474</entry>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <entry name="uuid">defe27ca-18ff-45c1-a96c-13a1d0d76474</entry>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </system>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <os>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </os>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <features>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </features>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.config"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:6d:6b:b2"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <target dev="tap47329f1e-0e"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/console.log" append="off"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <video>
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </video>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:23:55 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:23:55 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:23:55 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:23:55 compute-0 nova_compute[194781]: </domain>
Oct 02 19:23:55 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.453 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Preparing to wait for external event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.453 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.454 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.454 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.455 2 DEBUG nova.virt.libvirt.vif [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:23:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu',id=3,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-k0vw4q79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack
.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:23:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDcxNTAyNDM0ODI0MjgzODkyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0
aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92
YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJl
YW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4o
Oct 02 19:23:55 compute-0 nova_compute[194781]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDcxNTAyNDM0ODI0MjgzODkyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVu
dC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=defe27ca-18ff-45c1-a96c-13a1d0d76474,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.456 2 DEBUG nova.network.os_vif_util [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.457 2 DEBUG nova.network.os_vif_util [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.457 2 DEBUG os_vif [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47329f1e-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47329f1e-0e, col_values=(('external_ids', {'iface-id': '47329f1e-0ecb-476e-841d-aff3f14a7fcc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:6b:b2', 'vm-uuid': 'defe27ca-18ff-45c1-a96c-13a1d0d76474'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:55 compute-0 NetworkManager[52324]: <info>  [1759433035.4668] manager: (tap47329f1e-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.478 2 INFO os_vif [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e')
Oct 02 19:23:55 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:23:55.420 2 DEBUG nova.virt.libvirt.vif [None req-8eab7271-c7b0-4d [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:23:55 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:23:55.455 2 DEBUG nova.virt.libvirt.vif [None req-8eab7271-c7b0-4d [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.546 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.547 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.548 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.549 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No VIF found with MAC fa:16:3e:6d:6b:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:23:55 compute-0 nova_compute[194781]: 2025-10-02 19:23:55.551 2 INFO nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Using config drive
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.369 2 INFO nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Creating config drive at /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.config
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.381 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp41sky7du execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.513 2 DEBUG oslo_concurrency.processutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp41sky7du" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:23:56 compute-0 kernel: tap47329f1e-0e: entered promiscuous mode
Oct 02 19:23:56 compute-0 ovn_controller[97052]: 2025-10-02T19:23:56Z|00040|binding|INFO|Claiming lport 47329f1e-0ecb-476e-841d-aff3f14a7fcc for this chassis.
Oct 02 19:23:56 compute-0 ovn_controller[97052]: 2025-10-02T19:23:56Z|00041|binding|INFO|47329f1e-0ecb-476e-841d-aff3f14a7fcc: Claiming fa:16:3e:6d:6b:b2 192.168.0.44
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:56 compute-0 NetworkManager[52324]: <info>  [1759433036.6425] manager: (tap47329f1e-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.648 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:6b:b2 192.168.0.44'], port_security=['fa:16:3e:6d:6b:b2 192.168.0.44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-kewzjvdnt5lz-xlxy3mith77z-2ybijppocvxs-port-kfbnvyepymmq', 'neutron:cidrs': '192.168.0.44/24', 'neutron:device_id': 'defe27ca-18ff-45c1-a96c-13a1d0d76474', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-kewzjvdnt5lz-xlxy3mith77z-2ybijppocvxs-port-kfbnvyepymmq', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=47329f1e-0ecb-476e-841d-aff3f14a7fcc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.650 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 47329f1e-0ecb-476e-841d-aff3f14a7fcc in datapath b5760fda-9195-4e68-8506-4362bf1edf4f bound to our chassis
Oct 02 19:23:56 compute-0 ovn_controller[97052]: 2025-10-02T19:23:56Z|00042|binding|INFO|Setting lport 47329f1e-0ecb-476e-841d-aff3f14a7fcc ovn-installed in OVS
Oct 02 19:23:56 compute-0 ovn_controller[97052]: 2025-10-02T19:23:56Z|00043|binding|INFO|Setting lport 47329f1e-0ecb-476e-841d-aff3f14a7fcc up in Southbound
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.654 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.672 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[adee033e-066c-4779-a2a8-d41462a37b19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:23:56 compute-0 systemd-machined[154795]: New machine qemu-3-instance-00000003.
Oct 02 19:23:56 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.712 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[2e02d862-c1ca-4d8a-8cce-badc45d533fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.717 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[10bf9cd0-f1ad-43c4-9aff-c30266d80a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:23:56 compute-0 systemd-udevd[248947]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:23:56 compute-0 NetworkManager[52324]: <info>  [1759433036.7485] device (tap47329f1e-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:23:56 compute-0 NetworkManager[52324]: <info>  [1759433036.7540] device (tap47329f1e-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.755 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7129bc-d5d7-4e2a-a15d-e9c0a98a4bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.779 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ca7eb9-443c-468d-beaf-31cdf7064e43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 18189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248955, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.805 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5233e4-14a9-4c67-accd-3d636e64b13a]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394432, 'tstamp': 394432}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248958, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394434, 'tstamp': 394434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248958, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.807 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.810 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.811 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.811 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:23:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:23:56.812 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.884 2 DEBUG nova.compute.manager [req-bbe915c4-c427-4fcc-b95d-faec5a43f7db req-bde55f08-b4da-4d85-b605-0a0288c7863b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.885 2 DEBUG oslo_concurrency.lockutils [req-bbe915c4-c427-4fcc-b95d-faec5a43f7db req-bde55f08-b4da-4d85-b605-0a0288c7863b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.886 2 DEBUG oslo_concurrency.lockutils [req-bbe915c4-c427-4fcc-b95d-faec5a43f7db req-bde55f08-b4da-4d85-b605-0a0288c7863b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.886 2 DEBUG oslo_concurrency.lockutils [req-bbe915c4-c427-4fcc-b95d-faec5a43f7db req-bde55f08-b4da-4d85-b605-0a0288c7863b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:56 compute-0 nova_compute[194781]: 2025-10-02 19:23:56.887 2 DEBUG nova.compute.manager [req-bbe915c4-c427-4fcc-b95d-faec5a43f7db req-bde55f08-b4da-4d85-b605-0a0288c7863b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Processing event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.357 2 DEBUG nova.network.neutron [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updated VIF entry in instance network info cache for port 47329f1e-0ecb-476e-841d-aff3f14a7fcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.358 2 DEBUG nova.network.neutron [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updating instance_info_cache with network_info: [{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.372 2 DEBUG oslo_concurrency.lockutils [req-a1f067db-2d10-47d0-bcf5-aea45a449d7f req-ebe7cef7-0983-4e40-88b4-977075461e50 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.813 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.815 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433037.8122838, defe27ca-18ff-45c1-a96c-13a1d0d76474 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.816 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] VM Started (Lifecycle Event)
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.824 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.834 2 INFO nova.virt.libvirt.driver [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Instance spawned successfully.
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.836 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.843 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.859 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.868 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.870 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.871 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.872 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.873 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.875 2 DEBUG nova.virt.libvirt.driver [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.883 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.884 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433037.8125944, defe27ca-18ff-45c1-a96c-13a1d0d76474 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.885 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] VM Paused (Lifecycle Event)
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.912 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.920 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433037.8216243, defe27ca-18ff-45c1-a96c-13a1d0d76474 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.921 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] VM Resumed (Lifecycle Event)
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.958 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.964 2 INFO nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Took 9.22 seconds to spawn the instance on the hypervisor.
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.965 2 DEBUG nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:23:57 compute-0 nova_compute[194781]: 2025-10-02 19:23:57.967 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:23:58 compute-0 nova_compute[194781]: 2025-10-02 19:23:58.004 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:23:58 compute-0 nova_compute[194781]: 2025-10-02 19:23:58.031 2 INFO nova.compute.manager [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Took 9.88 seconds to build instance.
Oct 02 19:23:58 compute-0 nova_compute[194781]: 2025-10-02 19:23:58.049 2 DEBUG oslo_concurrency.lockutils [None req-8eab7271-c7b0-4da1-b817-157b4ba1db22 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:58 compute-0 nova_compute[194781]: 2025-10-02 19:23:58.996 2 DEBUG nova.compute.manager [req-3c79d0db-425f-40a2-b60c-421d78023a82 req-eef09d86-8a5b-407e-a6c8-8f30bf2733a5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:23:58 compute-0 nova_compute[194781]: 2025-10-02 19:23:58.998 2 DEBUG oslo_concurrency.lockutils [req-3c79d0db-425f-40a2-b60c-421d78023a82 req-eef09d86-8a5b-407e-a6c8-8f30bf2733a5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:23:59 compute-0 nova_compute[194781]: 2025-10-02 19:23:58.999 2 DEBUG oslo_concurrency.lockutils [req-3c79d0db-425f-40a2-b60c-421d78023a82 req-eef09d86-8a5b-407e-a6c8-8f30bf2733a5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:23:59 compute-0 nova_compute[194781]: 2025-10-02 19:23:59.001 2 DEBUG oslo_concurrency.lockutils [req-3c79d0db-425f-40a2-b60c-421d78023a82 req-eef09d86-8a5b-407e-a6c8-8f30bf2733a5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:23:59 compute-0 nova_compute[194781]: 2025-10-02 19:23:59.002 2 DEBUG nova.compute.manager [req-3c79d0db-425f-40a2-b60c-421d78023a82 req-eef09d86-8a5b-407e-a6c8-8f30bf2733a5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] No waiting events found dispatching network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:23:59 compute-0 nova_compute[194781]: 2025-10-02 19:23:59.003 2 WARNING nova.compute.manager [req-3c79d0db-425f-40a2-b60c-421d78023a82 req-eef09d86-8a5b-407e-a6c8-8f30bf2733a5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received unexpected event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc for instance with vm_state active and task_state None.
Oct 02 19:23:59 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:23:59 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:23:59 compute-0 podman[209015]: time="2025-10-02T19:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:23:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:23:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5167 "" "Go-http-client/1.1"
Oct 02 19:24:00 compute-0 nova_compute[194781]: 2025-10-02 19:24:00.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:00 compute-0 nova_compute[194781]: 2025-10-02 19:24:00.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: ERROR   19:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:24:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:24:02 compute-0 podman[248984]: 2025-10-02 19:24:02.715945417 +0000 UTC m=+0.083598055 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4)
Oct 02 19:24:02 compute-0 podman[248983]: 2025-10-02 19:24:02.73309433 +0000 UTC m=+0.101053126 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:24:05 compute-0 nova_compute[194781]: 2025-10-02 19:24:05.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:05 compute-0 nova_compute[194781]: 2025-10-02 19:24:05.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:05 compute-0 podman[249017]: 2025-10-02 19:24:05.697967741 +0000 UTC m=+0.075251360 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, io.buildah.version=1.29.0)
Oct 02 19:24:05 compute-0 podman[249036]: 2025-10-02 19:24:05.807556926 +0000 UTC m=+0.072793214 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct 02 19:24:05 compute-0 podman[249057]: 2025-10-02 19:24:05.908992831 +0000 UTC m=+0.067766348 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:24:10 compute-0 nova_compute[194781]: 2025-10-02 19:24:10.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:10 compute-0 nova_compute[194781]: 2025-10-02 19:24:10.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:12 compute-0 podman[249077]: 2025-10-02 19:24:12.702691749 +0000 UTC m=+0.066892225 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:24:12 compute-0 podman[249078]: 2025-10-02 19:24:12.717713284 +0000 UTC m=+0.077328706 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:24:14 compute-0 nova_compute[194781]: 2025-10-02 19:24:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:15 compute-0 nova_compute[194781]: 2025-10-02 19:24:15.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:15 compute-0 nova_compute[194781]: 2025-10-02 19:24:15.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:15 compute-0 nova_compute[194781]: 2025-10-02 19:24:15.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:15 compute-0 nova_compute[194781]: 2025-10-02 19:24:15.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:16 compute-0 podman[249114]: 2025-10-02 19:24:16.691992232 +0000 UTC m=+0.064448219 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:24:16 compute-0 podman[249115]: 2025-10-02 19:24:16.733975044 +0000 UTC m=+0.102224677 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.062 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.063 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.168 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.229 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.231 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.291 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.292 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.371 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.373 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.438 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.450 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.509 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.511 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.569 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.571 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.647 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.650 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.723 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.730 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.824 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.825 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.882 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.884 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.945 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:17 compute-0 nova_compute[194781]: 2025-10-02 19:24:17.946 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.001 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.401 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.402 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4922MB free_disk=72.4875602722168GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.403 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.403 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.506 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.507 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.507 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.507 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.508 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.601 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.618 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.638 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:24:18 compute-0 nova_compute[194781]: 2025-10-02 19:24:18.638 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:24:19 compute-0 nova_compute[194781]: 2025-10-02 19:24:19.639 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:19 compute-0 nova_compute[194781]: 2025-10-02 19:24:19.640 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.031 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.057 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.058 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.326 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.327 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.327 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:24:20 compute-0 nova_compute[194781]: 2025-10-02 19:24:20.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:21 compute-0 nova_compute[194781]: 2025-10-02 19:24:21.269 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:24:21 compute-0 nova_compute[194781]: 2025-10-02 19:24:21.285 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:24:21 compute-0 nova_compute[194781]: 2025-10-02 19:24:21.285 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:24:25 compute-0 nova_compute[194781]: 2025-10-02 19:24:25.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:25 compute-0 nova_compute[194781]: 2025-10-02 19:24:25.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:25 compute-0 podman[249195]: 2025-10-02 19:24:25.761089132 +0000 UTC m=+0.136207894 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:24:26 compute-0 ovn_controller[97052]: 2025-10-02T19:24:26Z|00044|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct 02 19:24:29 compute-0 podman[209015]: time="2025-10-02T19:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:24:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:24:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5178 "" "Go-http-client/1.1"
Oct 02 19:24:30 compute-0 nova_compute[194781]: 2025-10-02 19:24:30.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:30 compute-0 nova_compute[194781]: 2025-10-02 19:24:30.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:30 compute-0 ovn_controller[97052]: 2025-10-02T19:24:30Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:6b:b2 192.168.0.44
Oct 02 19:24:30 compute-0 ovn_controller[97052]: 2025-10-02T19:24:30Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:6b:b2 192.168.0.44
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: ERROR   19:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: ERROR   19:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: ERROR   19:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: ERROR   19:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:24:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:24:33 compute-0 podman[249234]: 2025-10-02 19:24:33.69382354 +0000 UTC m=+0.070160452 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true)
Oct 02 19:24:33 compute-0 podman[249233]: 2025-10-02 19:24:33.695512646 +0000 UTC m=+0.075378924 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:24:35 compute-0 nova_compute[194781]: 2025-10-02 19:24:35.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:35 compute-0 nova_compute[194781]: 2025-10-02 19:24:35.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:36 compute-0 podman[249271]: 2025-10-02 19:24:36.726351597 +0000 UTC m=+0.093710248 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 19:24:36 compute-0 podman[249270]: 2025-10-02 19:24:36.738526275 +0000 UTC m=+0.097413637 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=)
Oct 02 19:24:36 compute-0 podman[249269]: 2025-10-02 19:24:36.759553112 +0000 UTC m=+0.121903848 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, managed_by=edpm_ansible)
Oct 02 19:24:40 compute-0 nova_compute[194781]: 2025-10-02 19:24:40.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:40 compute-0 nova_compute[194781]: 2025-10-02 19:24:40.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:43 compute-0 podman[249329]: 2025-10-02 19:24:43.737063116 +0000 UTC m=+0.104818947 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 02 19:24:43 compute-0 podman[249328]: 2025-10-02 19:24:43.742900513 +0000 UTC m=+0.103249005 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:24:45 compute-0 nova_compute[194781]: 2025-10-02 19:24:45.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:45 compute-0 nova_compute[194781]: 2025-10-02 19:24:45.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:24:47.458 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:24:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:24:47.459 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:24:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:24:47.459 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:24:47 compute-0 podman[249371]: 2025-10-02 19:24:47.724727834 +0000 UTC m=+0.096714218 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 19:24:47 compute-0 podman[249372]: 2025-10-02 19:24:47.800730654 +0000 UTC m=+0.160200741 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 02 19:24:50 compute-0 nova_compute[194781]: 2025-10-02 19:24:50.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:50 compute-0 nova_compute[194781]: 2025-10-02 19:24:50.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:55 compute-0 nova_compute[194781]: 2025-10-02 19:24:55.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:55 compute-0 nova_compute[194781]: 2025-10-02 19:24:55.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:24:56 compute-0 podman[249414]: 2025-10-02 19:24:56.743034325 +0000 UTC m=+0.117470429 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:24:59 compute-0 podman[209015]: time="2025-10-02T19:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:24:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:24:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5186 "" "Go-http-client/1.1"
Oct 02 19:25:00 compute-0 nova_compute[194781]: 2025-10-02 19:25:00.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:00 compute-0 nova_compute[194781]: 2025-10-02 19:25:00.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: ERROR   19:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: ERROR   19:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: ERROR   19:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: ERROR   19:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:25:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:25:04 compute-0 podman[249439]: 2025-10-02 19:25:04.742168695 +0000 UTC m=+0.111271521 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Oct 02 19:25:04 compute-0 podman[249440]: 2025-10-02 19:25:04.745817133 +0000 UTC m=+0.113691596 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:25:05 compute-0 nova_compute[194781]: 2025-10-02 19:25:05.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:05 compute-0 nova_compute[194781]: 2025-10-02 19:25:05.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:07 compute-0 podman[249475]: 2025-10-02 19:25:07.766945911 +0000 UTC m=+0.122933495 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Oct 02 19:25:07 compute-0 podman[249477]: 2025-10-02 19:25:07.767507416 +0000 UTC m=+0.107106368 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 19:25:07 compute-0 podman[249476]: 2025-10-02 19:25:07.798397399 +0000 UTC m=+0.150492848 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, version=9.4, io.openshift.expose-services=, name=ubi9, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:25:10 compute-0 nova_compute[194781]: 2025-10-02 19:25:10.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:10 compute-0 nova_compute[194781]: 2025-10-02 19:25:10.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.940 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.940 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.948 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance defe27ca-18ff-45c1-a96c-13a1d0d76474 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:25:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:12.950 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/defe27ca-18ff-45c1-a96c-13a1d0d76474 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.518 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Thu, 02 Oct 2025 19:25:12 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9927abe4-9ce8-4a4b-9363-f557c51cd309 x-openstack-request-id: req-9927abe4-9ce8-4a4b-9363-f557c51cd309 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.519 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "defe27ca-18ff-45c1-a96c-13a1d0d76474", "name": "vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu", "status": "ACTIVE", "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "user_id": "5e0565a40c4e40f9ab77ce190f9527c5", "metadata": {"metering.server_group": "1264e536-3255-4eb3-9284-12888e889ce8"}, "hostId": "536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2", "image": {"id": "2c6780ee-8ca6-4dab-831c-c89907768547", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2c6780ee-8ca6-4dab-831c-c89907768547"}]}, "flavor": {"id": "9b897399-e7fe-4a3e-9cc1-c1f819a27557", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/9b897399-e7fe-4a3e-9cc1-c1f819a27557"}]}, "created": "2025-10-02T19:23:46Z", "updated": "2025-10-02T19:23:58Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.44", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6d:6b:b2"}, {"version": 4, "addr": "192.168.122.184", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6d:6b:b2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/defe27ca-18ff-45c1-a96c-13a1d0d76474"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/defe27ca-18ff-45c1-a96c-13a1d0d76474"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:23:57.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.519 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/defe27ca-18ff-45c1-a96c-13a1d0d76474 used request id req-9927abe4-9ce8-4a4b-9363-f557c51cd309 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.520 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'defe27ca-18ff-45c1-a96c-13a1d0d76474', 'name': 'vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.523 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'name': 'vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.528 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.528 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.529 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:25:13.529281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.564 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/cpu volume: 33040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.600 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/cpu volume: 216230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.622 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 37360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.623 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.623 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.624 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.624 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.625 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:25:13.623536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:25:13.625880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.630 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for defe27ca-18ff-45c1-a96c-13a1d0d76474 / tap47329f1e-0e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.630 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.634 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets volume: 35 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.638 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.640 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.640 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes volume: 5233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.640 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:25:13.639916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.642 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.642 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.642 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:25:13.642538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.643 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.643 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.644 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.645 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:25:13.645022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.645 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.646 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.647 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.648 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.648 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:25:13.647515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.649 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.650 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu>]
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:25:13.649618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.650 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.651 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:25:13.651057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.711 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.712 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.712 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.767 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.767 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.767 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.821 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.822 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.822 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.824 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.825 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.825 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.826 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:25:13.825115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.827 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.830 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:25:13.829761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.830 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.831 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.833 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:25:13.833389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.863 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.863 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.864 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.897 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.897 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.898 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.922 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.922 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.922 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.923 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.923 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.924 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.924 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 864994696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.924 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104660889 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.925 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104208362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:25:13.924253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.925 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 661561745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.926 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 116074178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.926 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 93869390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.926 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.927 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.927 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.928 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu>]
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.929 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.929 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.929 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.930 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.930 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:25:13.928690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.930 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.930 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.931 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.931 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.931 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.931 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.932 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.932 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.933 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.933 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.933 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.934 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.935 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.935 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.935 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.935 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.936 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.936 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.936 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:25:13.929696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:25:13.932957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:25:13.934450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.938 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.938 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.938 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.938 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:25:13.938055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.939 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.939 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.939 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.939 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.940 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.941 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.941 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 2502666553 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.941 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 10231196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.941 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.942 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 1355612991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.942 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 10614908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.942 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.942 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.942 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.943 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.944 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.945 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.945 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:25:13.941327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.946 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:25:13.944292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.946 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.946 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.946 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.947 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.947 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.947 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.947 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.948 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.948 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.948 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.948 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:25:13.947850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.949 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.949 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.949 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.949 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.950 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.950 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.951 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.951 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.951 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.951 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.952 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.952 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.952 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.952 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:25:13.951205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.953 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:25:13.952603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.954 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:25:13.953920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.955 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.955 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.956 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.956 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.957 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:25:13.955476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:25:13.956989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.957 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.957 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.958 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.959 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:25:13.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:25:14 compute-0 podman[249530]: 2025-10-02 19:25:14.708891536 +0000 UTC m=+0.068134018 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:25:14 compute-0 podman[249531]: 2025-10-02 19:25:14.740290413 +0000 UTC m=+0.099641188 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 19:25:15 compute-0 nova_compute[194781]: 2025-10-02 19:25:15.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:15 compute-0 nova_compute[194781]: 2025-10-02 19:25:15.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:16 compute-0 nova_compute[194781]: 2025-10-02 19:25:16.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:16 compute-0 nova_compute[194781]: 2025-10-02 19:25:16.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:17 compute-0 nova_compute[194781]: 2025-10-02 19:25:17.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:18 compute-0 podman[249572]: 2025-10-02 19:25:18.763319245 +0000 UTC m=+0.122652448 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:25:18 compute-0 podman[249573]: 2025-10-02 19:25:18.80504306 +0000 UTC m=+0.159097030 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.065 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.165 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.246 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.247 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.343 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.351 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.450 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.452 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.510 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.518 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.602 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.603 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.663 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.665 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.727 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.728 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.793 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.800 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.882 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.885 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.955 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:19 compute-0 nova_compute[194781]: 2025-10-02 19:25:19.956 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.019 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.020 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.076 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.416 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.418 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4848MB free_disk=72.4660873413086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.418 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.419 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.599 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.599 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.600 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.600 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.600 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.683 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.772 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.772 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.793 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.816 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.901 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.920 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.921 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.922 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.922 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.922 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:25:20 compute-0 nova_compute[194781]: 2025-10-02 19:25:20.939 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:25:21 compute-0 nova_compute[194781]: 2025-10-02 19:25:21.934 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.357 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.358 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.359 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:25:22 compute-0 nova_compute[194781]: 2025-10-02 19:25:22.359 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:25:24 compute-0 nova_compute[194781]: 2025-10-02 19:25:24.402 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:25:24 compute-0 nova_compute[194781]: 2025-10-02 19:25:24.423 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:25:24 compute-0 nova_compute[194781]: 2025-10-02 19:25:24.424 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:25:24 compute-0 nova_compute[194781]: 2025-10-02 19:25:24.425 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:24 compute-0 nova_compute[194781]: 2025-10-02 19:25:24.426 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:25:24 compute-0 nova_compute[194781]: 2025-10-02 19:25:24.441 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:25:25 compute-0 nova_compute[194781]: 2025-10-02 19:25:25.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:25 compute-0 nova_compute[194781]: 2025-10-02 19:25:25.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:27 compute-0 podman[249655]: 2025-10-02 19:25:27.718377579 +0000 UTC m=+0.087381567 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:25:29 compute-0 podman[209015]: time="2025-10-02T19:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:25:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:25:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5176 "" "Go-http-client/1.1"
Oct 02 19:25:30 compute-0 nova_compute[194781]: 2025-10-02 19:25:30.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:30 compute-0 nova_compute[194781]: 2025-10-02 19:25:30.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: ERROR   19:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: ERROR   19:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: ERROR   19:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: ERROR   19:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:25:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:25:35 compute-0 nova_compute[194781]: 2025-10-02 19:25:35.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:35 compute-0 nova_compute[194781]: 2025-10-02 19:25:35.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:35 compute-0 podman[249678]: 2025-10-02 19:25:35.739742679 +0000 UTC m=+0.100649915 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 02 19:25:35 compute-0 podman[249677]: 2025-10-02 19:25:35.76094577 +0000 UTC m=+0.125578797 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:25:36 compute-0 sshd-session[249654]: banner exchange: Connection from 77.28.34.104 port 48276: invalid format
Oct 02 19:25:38 compute-0 podman[249721]: 2025-10-02 19:25:38.722539033 +0000 UTC m=+0.081624951 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:25:38 compute-0 podman[249716]: 2025-10-02 19:25:38.733747296 +0000 UTC m=+0.090245435 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, version=9.4, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, release-0.7.12=, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:25:38 compute-0 podman[249715]: 2025-10-02 19:25:38.735670587 +0000 UTC m=+0.108258930 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Oct 02 19:25:40 compute-0 nova_compute[194781]: 2025-10-02 19:25:40.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:40 compute-0 nova_compute[194781]: 2025-10-02 19:25:40.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:40 compute-0 nova_compute[194781]: 2025-10-02 19:25:40.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:40.859 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:25:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:40.861 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:25:44 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:44.864 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:45 compute-0 nova_compute[194781]: 2025-10-02 19:25:45.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:45 compute-0 nova_compute[194781]: 2025-10-02 19:25:45.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:45 compute-0 podman[249776]: 2025-10-02 19:25:45.704736464 +0000 UTC m=+0.081854549 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:25:45 compute-0 podman[249777]: 2025-10-02 19:25:45.706629105 +0000 UTC m=+0.078198340 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.191 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.191 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.206 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.297 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.298 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.307 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.308 2 INFO nova.compute.claims [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:25:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:47.459 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:47.459 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:47.460 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.469 2 DEBUG nova.compute.provider_tree [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.486 2 DEBUG nova.scheduler.client.report [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.514 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.515 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.574 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.575 2 DEBUG nova.network.neutron [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.597 2 INFO nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.643 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.722 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.724 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.725 2 INFO nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Creating image(s)
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.726 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.726 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.727 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.746 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.810 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.812 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.814 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.831 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.887 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.888 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.930 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d,backing_fmt=raw /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.931 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "e2414b9b934482058b2047ac6d18f7f90fd5db4d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:47 compute-0 nova_compute[194781]: 2025-10-02 19:25:47.931 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.026 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.027 2 DEBUG nova.virt.disk.api [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Checking if we can resize image /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.028 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.087 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.088 2 DEBUG nova.virt.disk.api [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Cannot resize image /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.088 2 DEBUG nova.objects.instance [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'migration_context' on Instance uuid 1399f3a8-2c63-4b73-b015-f96a55b3d59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.102 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.103 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.104 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.122 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.180 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.181 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.182 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.199 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.257 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.258 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.304 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.305 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.306 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.384 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.386 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.386 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Ensure instance console log exists: /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.387 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.388 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:48 compute-0 nova_compute[194781]: 2025-10-02 19:25:48.388 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:49 compute-0 podman[249843]: 2025-10-02 19:25:49.71676969 +0000 UTC m=+0.086971996 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:25:49 compute-0 podman[249844]: 2025-10-02 19:25:49.782868782 +0000 UTC m=+0.152298147 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:25:50 compute-0 nova_compute[194781]: 2025-10-02 19:25:50.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:50 compute-0 nova_compute[194781]: 2025-10-02 19:25:50.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.524 2 DEBUG nova.network.neutron [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Successfully updated port: 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.539 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.540 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquired lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.540 2 DEBUG nova.network.neutron [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.655 2 DEBUG nova.compute.manager [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received event network-changed-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.656 2 DEBUG nova.compute.manager [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Refreshing instance network info cache due to event network-changed-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:25:51 compute-0 nova_compute[194781]: 2025-10-02 19:25:51.656 2 DEBUG oslo_concurrency.lockutils [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:25:52 compute-0 nova_compute[194781]: 2025-10-02 19:25:52.371 2 DEBUG nova.network.neutron [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.413 2 DEBUG nova.network.neutron [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updating instance_info_cache with network_info: [{"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.445 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Releasing lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.446 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Instance network_info: |[{"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.446 2 DEBUG oslo_concurrency.lockutils [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.447 2 DEBUG nova.network.neutron [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Refreshing network info cache for port 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.450 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Start _get_guest_xml network_info=[{"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '2c6780ee-8ca6-4dab-831c-c89907768547'}], 'ephemerals': [{'encrypted': False, 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.458 2 WARNING nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.465 2 DEBUG nova.virt.libvirt.host [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.467 2 DEBUG nova.virt.libvirt.host [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.474 2 DEBUG nova.virt.libvirt.host [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.474 2 DEBUG nova.virt.libvirt.host [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.475 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.475 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:18:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9b897399-e7fe-4a3e-9cc1-c1f819a27557',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:18:19Z,direct_url=<?>,disk_format='qcow2',id=2c6780ee-8ca6-4dab-831c-c89907768547,min_disk=0,min_ram=0,name='cirros',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.475 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.476 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.476 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.476 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.476 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.476 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.477 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.477 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.477 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.477 2 DEBUG nova.virt.hardware [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.481 2 DEBUG nova.virt.libvirt.vif [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp',id=4,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-2h7q92tb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256=''
,network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:25:47Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzcwNjM1MTQ1MTA1MjYxNDU4ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdH
RhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2
Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZW
N0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3
Oct 02 19:25:54 compute-0 nova_compute[194781]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzcwNjM1MTQ1MTA1MjYxNDU4ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2
Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=1399f3a8-2c63-4b73-b015-f96a55b3d59f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.481 2 DEBUG nova.network.os_vif_util [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.481 2 DEBUG nova.network.os_vif_util [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.482 2 DEBUG nova.objects.instance [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1399f3a8-2c63-4b73-b015-f96a55b3d59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.504 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <uuid>1399f3a8-2c63-4b73-b015-f96a55b3d59f</uuid>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <name>instance-00000004</name>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <memory>524288</memory>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:name>vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp</nova:name>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:25:54</nova:creationTime>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:flavor name="m1.small">
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:memory>512</nova:memory>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:user uuid="5e0565a40c4e40f9ab77ce190f9527c5">admin</nova:user>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:project uuid="c6bd7784161a4cc3a2e8715feee92228">admin</nova:project>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="2c6780ee-8ca6-4dab-831c-c89907768547"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         <nova:port uuid="1d3b6f60-e6d6-492b-9cc3-b2355b1866fd">
Oct 02 19:25:54 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="192.168.0.10" ipVersion="4"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <system>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <entry name="serial">1399f3a8-2c63-4b73-b015-f96a55b3d59f</entry>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <entry name="uuid">1399f3a8-2c63-4b73-b015-f96a55b3d59f</entry>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </system>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <os>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </os>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <features>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </features>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.config"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:b4:e2:ba"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <target dev="tap1d3b6f60-e6"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/console.log" append="off"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <video>
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </video>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:25:54 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:25:54 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:25:54 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:25:54 compute-0 nova_compute[194781]: </domain>
Oct 02 19:25:54 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.505 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Preparing to wait for external event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.505 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.505 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.505 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.506 2 DEBUG nova.virt.libvirt.vif [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp',id=4,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-2h7q92tb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack
.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:25:47Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzcwNjM1MTQ1MTA1MjYxNDU4ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0
aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92
YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJl
YW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4o
Oct 02 19:25:54 compute-0 nova_compute[194781]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzcwNjM1MTQ1MTA1MjYxNDU4ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVu
dC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=1399f3a8-2c63-4b73-b015-f96a55b3d59f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.507 2 DEBUG nova.network.os_vif_util [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.507 2 DEBUG nova.network.os_vif_util [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.508 2 DEBUG os_vif [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d3b6f60-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d3b6f60-e6, col_values=(('external_ids', {'iface-id': '1d3b6f60-e6d6-492b-9cc3-b2355b1866fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:e2:ba', 'vm-uuid': '1399f3a8-2c63-4b73-b015-f96a55b3d59f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:54 compute-0 NetworkManager[52324]: <info>  [1759433154.5181] manager: (tap1d3b6f60-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.527 2 INFO os_vif [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6')
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.581 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.581 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.581 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.581 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No VIF found with MAC fa:16:3e:b4:e2:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:25:54 compute-0 nova_compute[194781]: 2025-10-02 19:25:54.582 2 INFO nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Using config drive
Oct 02 19:25:54 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:25:54.481 2 DEBUG nova.virt.libvirt.vif [None req-536ae216-207b-45 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:25:54 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:25:54.506 2 DEBUG nova.virt.libvirt.vif [None req-536ae216-207b-45 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.629 2 INFO nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Creating config drive at /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.config
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.637 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf7umbq2v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.781 2 DEBUG oslo_concurrency.processutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf7umbq2v" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:25:55 compute-0 kernel: tap1d3b6f60-e6: entered promiscuous mode
Oct 02 19:25:55 compute-0 ovn_controller[97052]: 2025-10-02T19:25:55Z|00045|binding|INFO|Claiming lport 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd for this chassis.
Oct 02 19:25:55 compute-0 ovn_controller[97052]: 2025-10-02T19:25:55Z|00046|binding|INFO|1d3b6f60-e6d6-492b-9cc3-b2355b1866fd: Claiming fa:16:3e:b4:e2:ba 192.168.0.10
Oct 02 19:25:55 compute-0 NetworkManager[52324]: <info>  [1759433155.8992] manager: (tap1d3b6f60-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:55.907 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:e2:ba 192.168.0.10'], port_security=['fa:16:3e:b4:e2:ba 192.168.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-kewzjvdnt5lz-vt5t5337qak7-rqvxszkro6gs-port-fzv7g3jzrfep', 'neutron:cidrs': '192.168.0.10/24', 'neutron:device_id': '1399f3a8-2c63-4b73-b015-f96a55b3d59f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-kewzjvdnt5lz-vt5t5337qak7-rqvxszkro6gs-port-fzv7g3jzrfep', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:25:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:55.909 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd in datapath b5760fda-9195-4e68-8506-4362bf1edf4f bound to our chassis
Oct 02 19:25:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:55.910 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:25:55 compute-0 ovn_controller[97052]: 2025-10-02T19:25:55Z|00047|binding|INFO|Setting lport 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd ovn-installed in OVS
Oct 02 19:25:55 compute-0 ovn_controller[97052]: 2025-10-02T19:25:55Z|00048|binding|INFO|Setting lport 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd up in Southbound
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:55 compute-0 nova_compute[194781]: 2025-10-02 19:25:55.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:55.933 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[42e34a73-f328-4fbf-abbe-161f1544b704]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:25:55 compute-0 systemd-udevd[249917]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:25:55 compute-0 systemd-machined[154795]: New machine qemu-4-instance-00000004.
Oct 02 19:25:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:55.964 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec6b42a-ad53-4129-83e5-0cf5a73d5a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:25:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:55.968 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[98d43f7c-8d8b-4781-a05c-7e6af8308b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:25:55 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 02 19:25:55 compute-0 NetworkManager[52324]: <info>  [1759433155.9798] device (tap1d3b6f60-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:25:55 compute-0 NetworkManager[52324]: <info>  [1759433155.9807] device (tap1d3b6f60-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.001 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[35926832-51dd-40e2-813f-61ae125acdd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.021 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e88f94f5-82a6-434c-9821-0a3b6cdf482e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 18189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249922, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.039 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbc5399-ab8c-447a-81cc-4ce5371deda3]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394432, 'tstamp': 394432}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249928, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394434, 'tstamp': 394434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249928, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.041 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.044 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.044 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.045 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:25:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:25:56.045 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.621 2 DEBUG nova.compute.manager [req-d26542f9-e49d-42f6-9900-942f0265b096 req-3eef8daa-766a-48ff-85b4-f91288ff0d2a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.622 2 DEBUG oslo_concurrency.lockutils [req-d26542f9-e49d-42f6-9900-942f0265b096 req-3eef8daa-766a-48ff-85b4-f91288ff0d2a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.622 2 DEBUG oslo_concurrency.lockutils [req-d26542f9-e49d-42f6-9900-942f0265b096 req-3eef8daa-766a-48ff-85b4-f91288ff0d2a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.623 2 DEBUG oslo_concurrency.lockutils [req-d26542f9-e49d-42f6-9900-942f0265b096 req-3eef8daa-766a-48ff-85b4-f91288ff0d2a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:56 compute-0 nova_compute[194781]: 2025-10-02 19:25:56.623 2 DEBUG nova.compute.manager [req-d26542f9-e49d-42f6-9900-942f0265b096 req-3eef8daa-766a-48ff-85b4-f91288ff0d2a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Processing event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:25:57 compute-0 sshd-session[249889]: Invalid user wqmarlduiqkmgs from 77.28.34.104 port 60434
Oct 02 19:25:57 compute-0 sshd-session[249889]: fatal: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.158 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433157.158005, 1399f3a8-2c63-4b73-b015-f96a55b3d59f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.159 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] VM Started (Lifecycle Event)
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.161 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.168 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.173 2 INFO nova.virt.libvirt.driver [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Instance spawned successfully.
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.174 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.184 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.189 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.205 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.206 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.207 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.207 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.208 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.209 2 DEBUG nova.virt.libvirt.driver [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.212 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.212 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433157.1581647, 1399f3a8-2c63-4b73-b015-f96a55b3d59f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.213 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] VM Paused (Lifecycle Event)
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.307 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.313 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433157.1662688, 1399f3a8-2c63-4b73-b015-f96a55b3d59f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.314 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] VM Resumed (Lifecycle Event)
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.332 2 INFO nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Took 9.61 seconds to spawn the instance on the hypervisor.
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.333 2 DEBUG nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.369 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.374 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.385 2 DEBUG nova.network.neutron [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updated VIF entry in instance network info cache for port 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.385 2 DEBUG nova.network.neutron [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updating instance_info_cache with network_info: [{"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.409 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.434 2 DEBUG oslo_concurrency.lockutils [req-6f8f46b7-febe-4ea1-90ca-cc0c30306bbc req-0d2b44eb-f274-469b-853b-c949390a5ba3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.436 2 INFO nova.compute.manager [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Took 10.17 seconds to build instance.
Oct 02 19:25:57 compute-0 nova_compute[194781]: 2025-10-02 19:25:57.458 2 DEBUG oslo_concurrency.lockutils [None req-536ae216-207b-45c5-9875-84bb64a2a0a4 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:58 compute-0 podman[249937]: 2025-10-02 19:25:58.710908788 +0000 UTC m=+0.082280410 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:25:58 compute-0 nova_compute[194781]: 2025-10-02 19:25:58.732 2 DEBUG nova.compute.manager [req-ceceddfe-77d2-4a99-933a-f3e0246dcce5 req-965d3658-e053-425f-a1e4-18ece4d01df8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:25:58 compute-0 nova_compute[194781]: 2025-10-02 19:25:58.733 2 DEBUG oslo_concurrency.lockutils [req-ceceddfe-77d2-4a99-933a-f3e0246dcce5 req-965d3658-e053-425f-a1e4-18ece4d01df8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:25:58 compute-0 nova_compute[194781]: 2025-10-02 19:25:58.733 2 DEBUG oslo_concurrency.lockutils [req-ceceddfe-77d2-4a99-933a-f3e0246dcce5 req-965d3658-e053-425f-a1e4-18ece4d01df8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:25:58 compute-0 nova_compute[194781]: 2025-10-02 19:25:58.733 2 DEBUG oslo_concurrency.lockutils [req-ceceddfe-77d2-4a99-933a-f3e0246dcce5 req-965d3658-e053-425f-a1e4-18ece4d01df8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:25:58 compute-0 nova_compute[194781]: 2025-10-02 19:25:58.734 2 DEBUG nova.compute.manager [req-ceceddfe-77d2-4a99-933a-f3e0246dcce5 req-965d3658-e053-425f-a1e4-18ece4d01df8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] No waiting events found dispatching network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:25:58 compute-0 nova_compute[194781]: 2025-10-02 19:25:58.734 2 WARNING nova.compute.manager [req-ceceddfe-77d2-4a99-933a-f3e0246dcce5 req-965d3658-e053-425f-a1e4-18ece4d01df8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received unexpected event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd for instance with vm_state active and task_state None.
Oct 02 19:25:59 compute-0 nova_compute[194781]: 2025-10-02 19:25:59.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:25:59 compute-0 podman[209015]: time="2025-10-02T19:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:25:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:25:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5193 "" "Go-http-client/1.1"
Oct 02 19:26:00 compute-0 nova_compute[194781]: 2025-10-02 19:26:00.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: ERROR   19:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:26:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:26:04 compute-0 nova_compute[194781]: 2025-10-02 19:26:04.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:05 compute-0 nova_compute[194781]: 2025-10-02 19:26:05.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:06 compute-0 podman[249960]: 2025-10-02 19:26:06.720971049 +0000 UTC m=+0.095815491 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:26:06 compute-0 podman[249961]: 2025-10-02 19:26:06.747286572 +0000 UTC m=+0.118611578 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:26:09 compute-0 nova_compute[194781]: 2025-10-02 19:26:09.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:09 compute-0 podman[249997]: 2025-10-02 19:26:09.711644854 +0000 UTC m=+0.081441099 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:26:09 compute-0 podman[249996]: 2025-10-02 19:26:09.731777096 +0000 UTC m=+0.092748106 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, version=9.4, release=1214.1726694543)
Oct 02 19:26:09 compute-0 podman[249995]: 2025-10-02 19:26:09.767097379 +0000 UTC m=+0.133446280 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:26:10 compute-0 nova_compute[194781]: 2025-10-02 19:26:10.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:14 compute-0 nova_compute[194781]: 2025-10-02 19:26:14.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:15 compute-0 nova_compute[194781]: 2025-10-02 19:26:15.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:16 compute-0 nova_compute[194781]: 2025-10-02 19:26:16.070 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:16 compute-0 podman[250054]: 2025-10-02 19:26:16.679939405 +0000 UTC m=+0.059001522 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:26:16 compute-0 podman[250055]: 2025-10-02 19:26:16.719519512 +0000 UTC m=+0.093236058 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:26:17 compute-0 nova_compute[194781]: 2025-10-02 19:26:17.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:18 compute-0 nova_compute[194781]: 2025-10-02 19:26:18.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:19 compute-0 nova_compute[194781]: 2025-10-02 19:26:19.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:19 compute-0 nova_compute[194781]: 2025-10-02 19:26:19.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:20 compute-0 nova_compute[194781]: 2025-10-02 19:26:20.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:20 compute-0 nova_compute[194781]: 2025-10-02 19:26:20.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:20 compute-0 podman[250097]: 2025-10-02 19:26:20.728460702 +0000 UTC m=+0.102978275 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:26:20 compute-0 podman[250098]: 2025-10-02 19:26:20.7771048 +0000 UTC m=+0.147161604 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.070 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.072 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.074 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.186 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.261 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.262 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.334 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.336 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.398 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.400 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.467 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.476 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.535 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.537 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.610 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.611 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.671 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.673 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.733 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.743 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.800 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.803 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.864 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.866 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.927 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:21 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.929 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:21.997 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.025 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.091 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.092 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.162 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.164 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.227 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.228 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.288 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.660 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.662 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4699MB free_disk=72.46505355834961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.662 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.663 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.782 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.783 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.783 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.783 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.783 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.783 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.870 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.883 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.904 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:26:22 compute-0 nova_compute[194781]: 2025-10-02 19:26:22.905 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:26:23 compute-0 nova_compute[194781]: 2025-10-02 19:26:23.902 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:23 compute-0 nova_compute[194781]: 2025-10-02 19:26:23.902 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:26:23 compute-0 nova_compute[194781]: 2025-10-02 19:26:23.903 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:26:24 compute-0 nova_compute[194781]: 2025-10-02 19:26:24.386 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:26:24 compute-0 nova_compute[194781]: 2025-10-02 19:26:24.386 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:26:24 compute-0 nova_compute[194781]: 2025-10-02 19:26:24.387 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:26:24 compute-0 nova_compute[194781]: 2025-10-02 19:26:24.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:25 compute-0 nova_compute[194781]: 2025-10-02 19:26:25.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:25 compute-0 nova_compute[194781]: 2025-10-02 19:26:25.908 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:26:25 compute-0 nova_compute[194781]: 2025-10-02 19:26:25.926 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:26:25 compute-0 nova_compute[194781]: 2025-10-02 19:26:25.927 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:26:25 compute-0 ovn_controller[97052]: 2025-10-02T19:26:25Z|00049|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 02 19:26:28 compute-0 ovn_controller[97052]: 2025-10-02T19:26:28Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:e2:ba 192.168.0.10
Oct 02 19:26:28 compute-0 ovn_controller[97052]: 2025-10-02T19:26:28Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:e2:ba 192.168.0.10
Oct 02 19:26:29 compute-0 nova_compute[194781]: 2025-10-02 19:26:29.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:29 compute-0 podman[250206]: 2025-10-02 19:26:29.70845213 +0000 UTC m=+0.081733627 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:26:29 compute-0 podman[209015]: time="2025-10-02T19:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:26:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:26:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5195 "" "Go-http-client/1.1"
Oct 02 19:26:30 compute-0 nova_compute[194781]: 2025-10-02 19:26:30.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: ERROR   19:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:26:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:26:34 compute-0 nova_compute[194781]: 2025-10-02 19:26:34.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:35 compute-0 nova_compute[194781]: 2025-10-02 19:26:35.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:37 compute-0 podman[250230]: 2025-10-02 19:26:37.748166005 +0000 UTC m=+0.104124204 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:26:37 compute-0 podman[250231]: 2025-10-02 19:26:37.778408764 +0000 UTC m=+0.129901774 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:26:39 compute-0 nova_compute[194781]: 2025-10-02 19:26:39.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:40 compute-0 nova_compute[194781]: 2025-10-02 19:26:40.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:40 compute-0 podman[250268]: 2025-10-02 19:26:40.731763707 +0000 UTC m=+0.100847314 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Oct 02 19:26:40 compute-0 podman[250269]: 2025-10-02 19:26:40.742929349 +0000 UTC m=+0.108362287 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, name=ubi9, config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:26:40 compute-0 podman[250270]: 2025-10-02 19:26:40.763542563 +0000 UTC m=+0.125117517 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:26:44 compute-0 nova_compute[194781]: 2025-10-02 19:26:44.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:45 compute-0 nova_compute[194781]: 2025-10-02 19:26:45.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:26:47.461 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:26:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:26:47.463 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:26:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:26:47.465 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:26:47 compute-0 podman[250325]: 2025-10-02 19:26:47.713390236 +0000 UTC m=+0.080473796 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:26:47 compute-0 podman[250326]: 2025-10-02 19:26:47.727036759 +0000 UTC m=+0.086522484 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:26:49 compute-0 nova_compute[194781]: 2025-10-02 19:26:49.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:50 compute-0 nova_compute[194781]: 2025-10-02 19:26:50.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:51 compute-0 podman[250372]: 2025-10-02 19:26:51.733399386 +0000 UTC m=+0.106863511 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:26:51 compute-0 podman[250373]: 2025-10-02 19:26:51.75116352 +0000 UTC m=+0.114209950 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:26:54 compute-0 nova_compute[194781]: 2025-10-02 19:26:54.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:55 compute-0 nova_compute[194781]: 2025-10-02 19:26:55.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:59 compute-0 nova_compute[194781]: 2025-10-02 19:26:59.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:26:59 compute-0 podman[209015]: time="2025-10-02T19:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:26:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:26:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5194 "" "Go-http-client/1.1"
Oct 02 19:27:00 compute-0 nova_compute[194781]: 2025-10-02 19:27:00.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:00 compute-0 podman[250418]: 2025-10-02 19:27:00.739068458 +0000 UTC m=+0.111273098 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:27:01 compute-0 openstack_network_exporter[211160]: ERROR   19:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:27:01 compute-0 openstack_network_exporter[211160]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:01 compute-0 openstack_network_exporter[211160]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:01 compute-0 openstack_network_exporter[211160]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:27:01 compute-0 openstack_network_exporter[211160]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:27:04 compute-0 nova_compute[194781]: 2025-10-02 19:27:04.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:05 compute-0 nova_compute[194781]: 2025-10-02 19:27:05.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:08 compute-0 podman[250442]: 2025-10-02 19:27:08.7966173 +0000 UTC m=+0.145704129 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:27:08 compute-0 podman[250443]: 2025-10-02 19:27:08.821882987 +0000 UTC m=+0.162409447 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:27:09 compute-0 nova_compute[194781]: 2025-10-02 19:27:09.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:10 compute-0 nova_compute[194781]: 2025-10-02 19:27:10.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:11 compute-0 podman[250481]: 2025-10-02 19:27:11.734551307 +0000 UTC m=+0.104748579 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, version=9.6)
Oct 02 19:27:11 compute-0 podman[250482]: 2025-10-02 19:27:11.756855041 +0000 UTC m=+0.111701089 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Oct 02 19:27:11 compute-0 podman[250483]: 2025-10-02 19:27:11.764414396 +0000 UTC m=+0.125909286 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.941 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.941 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.941 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.954 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'defe27ca-18ff-45c1-a96c-13a1d0d76474', 'name': 'vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.959 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'name': 'vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.963 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.967 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:27:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:12.968 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1399f3a8-2c63-4b73-b015-f96a55b3d59f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.604 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Thu, 02 Oct 2025 19:27:12 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e01b08f8-216d-45b3-bb39-4014a73e3f97 x-openstack-request-id: req-e01b08f8-216d-45b3-bb39-4014a73e3f97 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.605 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1399f3a8-2c63-4b73-b015-f96a55b3d59f", "name": "vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp", "status": "ACTIVE", "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "user_id": "5e0565a40c4e40f9ab77ce190f9527c5", "metadata": {"metering.server_group": "1264e536-3255-4eb3-9284-12888e889ce8"}, "hostId": "536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2", "image": {"id": "2c6780ee-8ca6-4dab-831c-c89907768547", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2c6780ee-8ca6-4dab-831c-c89907768547"}]}, "flavor": {"id": "9b897399-e7fe-4a3e-9cc1-c1f819a27557", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/9b897399-e7fe-4a3e-9cc1-c1f819a27557"}]}, "created": "2025-10-02T19:25:45Z", "updated": "2025-10-02T19:25:57Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:e2:ba"}, {"version": 4, "addr": "192.168.122.198", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b4:e2:ba"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1399f3a8-2c63-4b73-b015-f96a55b3d59f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1399f3a8-2c63-4b73-b015-f96a55b3d59f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:25:57.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.605 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1399f3a8-2c63-4b73-b015-f96a55b3d59f used request id req-e01b08f8-216d-45b3-bb39-4014a73e3f97 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.607 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1399f3a8-2c63-4b73-b015-f96a55b3d59f', 'name': 'vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.608 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:27:13.608812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.647 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/cpu volume: 34600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.678 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/cpu volume: 250970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.711 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 38950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.748 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/cpu volume: 31680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.750 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.751 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.752 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.752 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.753 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:27:13.751382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:27:13.755919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.763 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.769 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.774 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.778 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1399f3a8-2c63-4b73-b015-f96a55b3d59f / tap1d3b6f60-e6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.778 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.779 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.779 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.780 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.780 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.780 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.780 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes volume: 8664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:27:13.780274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.781 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.781 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.782 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.783 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.783 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.783 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.784 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:27:13.782728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.786 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.786 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.787 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.788 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.788 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.789 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.790 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.791 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.791 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.791 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.792 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.793 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.793 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.793 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:27:13.786140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.793 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:27:13.791353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.794 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.794 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.795 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.795 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.795 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.795 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.795 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:27:13.795407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.795 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp>]
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.796 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.797 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:27:13.797327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.868 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.869 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.870 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.931 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.931 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.932 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.986 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.987 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:13.988 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.041 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.042 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.042 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:27:14.043650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.043 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.044 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.044 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.045 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.045 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.045 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:27:14.046073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.046 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.046 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.046 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes volume: 7634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.047 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.047 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.048 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:27:14.048479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.048 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.074 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.074 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.075 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.102 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.103 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.104 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.127 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.128 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.129 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.156 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.157 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.157 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 864994696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104660889 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.158 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104208362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.159 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 661561745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.159 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 116074178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.159 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 93869390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.159 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:27:14.158387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.160 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.160 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.160 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 670468778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.160 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 113543433 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.161 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 206559376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.161 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp>]
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.162 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:27:14.162000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.163 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:27:14.162868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.163 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.163 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.163 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.164 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.164 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.165 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.165 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:27:14.166267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.166 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.168 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:27:14.167930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.168 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.168 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.168 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.169 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.169 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.169 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.169 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.169 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.170 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.170 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.170 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:27:14.171364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.171 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.172 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.172 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.172 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.172 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.172 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.173 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.173 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.173 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.173 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.173 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.175 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.175 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 2502666553 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.175 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 10231196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.175 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.175 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 1359285126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.176 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 10614908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.176 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.176 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.176 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:27:14.174995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.177 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.177 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 3305858753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.177 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 11917091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.177 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:27:14.178523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.178 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.179 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.179 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.179 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.179 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.179 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.180 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.180 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.180 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.180 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.181 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.182 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.182 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:27:14.181938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.182 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.182 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.182 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.183 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.184 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.184 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.184 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.185 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.186 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.186 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:27:14.185409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.188 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:27:14.187252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:27:14.188546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.189 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.190 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:27:14.189836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.190 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.190 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:27:14.191652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.192 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes.delta volume: 2742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.192 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.193 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:27:14.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:27:14 compute-0 nova_compute[194781]: 2025-10-02 19:27:14.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:15 compute-0 nova_compute[194781]: 2025-10-02 19:27:15.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:18 compute-0 nova_compute[194781]: 2025-10-02 19:27:18.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:18 compute-0 nova_compute[194781]: 2025-10-02 19:27:18.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:18 compute-0 podman[250540]: 2025-10-02 19:27:18.739135873 +0000 UTC m=+0.106004010 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:27:18 compute-0 podman[250541]: 2025-10-02 19:27:18.74268665 +0000 UTC m=+0.091584818 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:27:19 compute-0 nova_compute[194781]: 2025-10-02 19:27:19.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:19 compute-0 nova_compute[194781]: 2025-10-02 19:27:19.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:20 compute-0 nova_compute[194781]: 2025-10-02 19:27:20.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:20 compute-0 nova_compute[194781]: 2025-10-02 19:27:20.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:22 compute-0 nova_compute[194781]: 2025-10-02 19:27:22.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:22 compute-0 nova_compute[194781]: 2025-10-02 19:27:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:22 compute-0 nova_compute[194781]: 2025-10-02 19:27:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:22 compute-0 nova_compute[194781]: 2025-10-02 19:27:22.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:27:22 compute-0 podman[250585]: 2025-10-02 19:27:22.717725893 +0000 UTC m=+0.085023777 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 19:27:22 compute-0 podman[250586]: 2025-10-02 19:27:22.779599204 +0000 UTC m=+0.145306160 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.062 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.063 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.157 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.236 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.237 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.317 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.318 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.419 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.421 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.496 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.505 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.601 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.602 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.668 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.669 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.734 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.736 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.807 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.819 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.915 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.916 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.981 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:23 compute-0 nova_compute[194781]: 2025-10-02 19:27:23.983 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.039 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.040 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.098 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.106 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.176 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.177 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.238 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.239 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.316 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.317 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.377 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.800 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.802 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4666MB free_disk=72.44356918334961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.802 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.803 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.901 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.902 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.903 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.903 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.904 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.904 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:27:24 compute-0 nova_compute[194781]: 2025-10-02 19:27:24.993 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:27:25 compute-0 nova_compute[194781]: 2025-10-02 19:27:25.022 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:27:25 compute-0 nova_compute[194781]: 2025-10-02 19:27:25.024 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:27:25 compute-0 nova_compute[194781]: 2025-10-02 19:27:25.025 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:27:25 compute-0 nova_compute[194781]: 2025-10-02 19:27:25.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:27 compute-0 nova_compute[194781]: 2025-10-02 19:27:27.026 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:27:27 compute-0 nova_compute[194781]: 2025-10-02 19:27:27.027 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:27:27 compute-0 nova_compute[194781]: 2025-10-02 19:27:27.412 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:27:27 compute-0 nova_compute[194781]: 2025-10-02 19:27:27.412 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:27:27 compute-0 nova_compute[194781]: 2025-10-02 19:27:27.413 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:27:29 compute-0 nova_compute[194781]: 2025-10-02 19:27:29.429 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updating instance_info_cache with network_info: [{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:27:29 compute-0 nova_compute[194781]: 2025-10-02 19:27:29.449 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:27:29 compute-0 nova_compute[194781]: 2025-10-02 19:27:29.450 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:27:29 compute-0 nova_compute[194781]: 2025-10-02 19:27:29.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:29 compute-0 podman[209015]: time="2025-10-02T19:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:27:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:27:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5201 "" "Go-http-client/1.1"
Oct 02 19:27:30 compute-0 nova_compute[194781]: 2025-10-02 19:27:30.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: ERROR   19:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:27:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:27:31 compute-0 podman[250678]: 2025-10-02 19:27:31.695733406 +0000 UTC m=+0.064368683 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:27:34 compute-0 nova_compute[194781]: 2025-10-02 19:27:34.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:35 compute-0 nova_compute[194781]: 2025-10-02 19:27:35.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:39 compute-0 nova_compute[194781]: 2025-10-02 19:27:39.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:39 compute-0 podman[250703]: 2025-10-02 19:27:39.71873212 +0000 UTC m=+0.084326690 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 02 19:27:39 compute-0 podman[250702]: 2025-10-02 19:27:39.726667254 +0000 UTC m=+0.091221529 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:27:40 compute-0 nova_compute[194781]: 2025-10-02 19:27:40.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:42 compute-0 podman[250740]: 2025-10-02 19:27:42.72964537 +0000 UTC m=+0.093791952 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:27:42 compute-0 podman[250742]: 2025-10-02 19:27:42.751583675 +0000 UTC m=+0.096668741 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 19:27:42 compute-0 podman[250741]: 2025-10-02 19:27:42.775797487 +0000 UTC m=+0.133343498 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-type=git)
Oct 02 19:27:44 compute-0 nova_compute[194781]: 2025-10-02 19:27:44.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:45 compute-0 nova_compute[194781]: 2025-10-02 19:27:45.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:27:47.462 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:27:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:27:47.463 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:27:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:27:47.464 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:27:49 compute-0 nova_compute[194781]: 2025-10-02 19:27:49.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:49 compute-0 podman[250801]: 2025-10-02 19:27:49.698322798 +0000 UTC m=+0.071842565 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:27:49 compute-0 podman[250800]: 2025-10-02 19:27:49.698371139 +0000 UTC m=+0.073449678 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:27:50 compute-0 nova_compute[194781]: 2025-10-02 19:27:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:53 compute-0 podman[250841]: 2025-10-02 19:27:53.705759768 +0000 UTC m=+0.075529244 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:27:53 compute-0 podman[250842]: 2025-10-02 19:27:53.753250215 +0000 UTC m=+0.105565641 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:27:54 compute-0 nova_compute[194781]: 2025-10-02 19:27:54.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:55 compute-0 nova_compute[194781]: 2025-10-02 19:27:55.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:56 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:27:59 compute-0 nova_compute[194781]: 2025-10-02 19:27:59.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:27:59 compute-0 podman[209015]: time="2025-10-02T19:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:27:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:27:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5208 "" "Go-http-client/1.1"
Oct 02 19:28:00 compute-0 nova_compute[194781]: 2025-10-02 19:28:00.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: ERROR   19:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:28:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:28:02 compute-0 podman[250884]: 2025-10-02 19:28:02.729154637 +0000 UTC m=+0.094727730 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:28:04 compute-0 nova_compute[194781]: 2025-10-02 19:28:04.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:05 compute-0 nova_compute[194781]: 2025-10-02 19:28:05.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:09 compute-0 nova_compute[194781]: 2025-10-02 19:28:09.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:10 compute-0 nova_compute[194781]: 2025-10-02 19:28:10.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:10 compute-0 podman[250908]: 2025-10-02 19:28:10.708079241 +0000 UTC m=+0.083789046 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Oct 02 19:28:10 compute-0 podman[250909]: 2025-10-02 19:28:10.751778557 +0000 UTC m=+0.110672309 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 02 19:28:13 compute-0 podman[250948]: 2025-10-02 19:28:13.754607668 +0000 UTC m=+0.110550026 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct 02 19:28:13 compute-0 podman[250950]: 2025-10-02 19:28:13.778671006 +0000 UTC m=+0.109336524 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:28:13 compute-0 podman[250949]: 2025-10-02 19:28:13.778714387 +0000 UTC m=+0.116131416 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, version=9.4)
Oct 02 19:28:14 compute-0 nova_compute[194781]: 2025-10-02 19:28:14.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:15 compute-0 nova_compute[194781]: 2025-10-02 19:28:15.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:18 compute-0 nova_compute[194781]: 2025-10-02 19:28:18.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:19 compute-0 nova_compute[194781]: 2025-10-02 19:28:19.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:20 compute-0 nova_compute[194781]: 2025-10-02 19:28:20.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:20 compute-0 nova_compute[194781]: 2025-10-02 19:28:20.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:20 compute-0 nova_compute[194781]: 2025-10-02 19:28:20.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:20 compute-0 podman[251009]: 2025-10-02 19:28:20.691116315 +0000 UTC m=+0.065229146 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:28:20 compute-0 podman[251010]: 2025-10-02 19:28:20.71396542 +0000 UTC m=+0.081108734 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 19:28:21 compute-0 nova_compute[194781]: 2025-10-02 19:28:21.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:22 compute-0 nova_compute[194781]: 2025-10-02 19:28:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:22 compute-0 nova_compute[194781]: 2025-10-02 19:28:22.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:22 compute-0 nova_compute[194781]: 2025-10-02 19:28:22.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:22 compute-0 nova_compute[194781]: 2025-10-02 19:28:22.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:22 compute-0 nova_compute[194781]: 2025-10-02 19:28:22.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:28:24 compute-0 nova_compute[194781]: 2025-10-02 19:28:24.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:24 compute-0 podman[251052]: 2025-10-02 19:28:24.729483768 +0000 UTC m=+0.085038250 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 19:28:24 compute-0 podman[251053]: 2025-10-02 19:28:24.781838977 +0000 UTC m=+0.124096401 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.060 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.060 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.060 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.061 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.166 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.267 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.268 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.329 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.330 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.392 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.394 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.459 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.467 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.552 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.554 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.614 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.615 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.673 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.674 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.771 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.778 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.853 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.854 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.912 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.913 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.972 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:25 compute-0 nova_compute[194781]: 2025-10-02 19:28:25.973 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.053 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.060 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.119 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.121 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.180 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.181 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.243 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.244 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.323 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.727 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.729 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4650MB free_disk=72.44356918334961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.729 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.729 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.841 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.841 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.841 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.841 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.842 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.842 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:28:26 compute-0 nova_compute[194781]: 2025-10-02 19:28:26.990 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:28:27 compute-0 nova_compute[194781]: 2025-10-02 19:28:27.006 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:28:27 compute-0 nova_compute[194781]: 2025-10-02 19:28:27.009 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:28:27 compute-0 nova_compute[194781]: 2025-10-02 19:28:27.010 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.011 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.012 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.012 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.450 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.450 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.451 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.451 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:28:29 compute-0 nova_compute[194781]: 2025-10-02 19:28:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:29 compute-0 podman[209015]: time="2025-10-02T19:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:28:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:28:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5213 "" "Go-http-client/1.1"
Oct 02 19:28:30 compute-0 nova_compute[194781]: 2025-10-02 19:28:30.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:30 compute-0 nova_compute[194781]: 2025-10-02 19:28:30.641 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:28:30 compute-0 nova_compute[194781]: 2025-10-02 19:28:30.683 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:28:30 compute-0 nova_compute[194781]: 2025-10-02 19:28:30.683 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: ERROR   19:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: ERROR   19:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: ERROR   19:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:28:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:28:33 compute-0 podman[251140]: 2025-10-02 19:28:33.682232069 +0000 UTC m=+0.059208645 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:28:34 compute-0 nova_compute[194781]: 2025-10-02 19:28:34.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:35 compute-0 nova_compute[194781]: 2025-10-02 19:28:35.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:39 compute-0 nova_compute[194781]: 2025-10-02 19:28:39.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:40 compute-0 nova_compute[194781]: 2025-10-02 19:28:40.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:41 compute-0 podman[251164]: 2025-10-02 19:28:41.717811014 +0000 UTC m=+0.082676036 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 19:28:41 compute-0 podman[251163]: 2025-10-02 19:28:41.724950086 +0000 UTC m=+0.086509999 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 02 19:28:44 compute-0 nova_compute[194781]: 2025-10-02 19:28:44.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:44 compute-0 podman[251199]: 2025-10-02 19:28:44.724001602 +0000 UTC m=+0.102524160 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., 
version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Oct 02 19:28:44 compute-0 podman[251200]: 2025-10-02 19:28:44.739718475 +0000 UTC m=+0.105068129 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, name=ubi9, release-0.7.12=)
Oct 02 19:28:44 compute-0 podman[251201]: 2025-10-02 19:28:44.746319443 +0000 UTC m=+0.112063087 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 19:28:45 compute-0 nova_compute[194781]: 2025-10-02 19:28:45.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:28:47.463 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:28:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:28:47.464 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:28:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:28:47.465 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:28:49 compute-0 nova_compute[194781]: 2025-10-02 19:28:49.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:50 compute-0 nova_compute[194781]: 2025-10-02 19:28:50.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:51 compute-0 podman[251259]: 2025-10-02 19:28:51.691366678 +0000 UTC m=+0.067510198 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:28:51 compute-0 podman[251260]: 2025-10-02 19:28:51.705815336 +0000 UTC m=+0.079751347 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 19:28:54 compute-0 nova_compute[194781]: 2025-10-02 19:28:54.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:55 compute-0 nova_compute[194781]: 2025-10-02 19:28:55.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:55 compute-0 podman[251301]: 2025-10-02 19:28:55.76581722 +0000 UTC m=+0.118687205 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:28:55 compute-0 podman[251302]: 2025-10-02 19:28:55.802491348 +0000 UTC m=+0.161120288 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:28:59 compute-0 nova_compute[194781]: 2025-10-02 19:28:59.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:28:59 compute-0 podman[209015]: time="2025-10-02T19:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:28:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:28:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5210 "" "Go-http-client/1.1"
Oct 02 19:29:00 compute-0 nova_compute[194781]: 2025-10-02 19:29:00.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: ERROR   19:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: ERROR   19:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: ERROR   19:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:29:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:29:04 compute-0 nova_compute[194781]: 2025-10-02 19:29:04.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:04 compute-0 podman[251345]: 2025-10-02 19:29:04.737632575 +0000 UTC m=+0.094743600 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:29:05 compute-0 nova_compute[194781]: 2025-10-02 19:29:05.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:09 compute-0 nova_compute[194781]: 2025-10-02 19:29:09.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:10 compute-0 nova_compute[194781]: 2025-10-02 19:29:10.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:12 compute-0 podman[251367]: 2025-10-02 19:29:12.766365629 +0000 UTC m=+0.118506440 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:29:12 compute-0 podman[251368]: 2025-10-02 19:29:12.775844674 +0000 UTC m=+0.136167615 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.941 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.942 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba41f9370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.950 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'defe27ca-18ff-45c1-a96c-13a1d0d76474', 'name': 'vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.955 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'name': 'vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.960 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.964 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1399f3a8-2c63-4b73-b015-f96a55b3d59f', 'name': 'vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.965 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.965 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.965 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.967 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:29:12.965603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:12.995 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/cpu volume: 36100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.031 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/cpu volume: 252490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.061 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 40460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.098 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/cpu volume: 33210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.100 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.100 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.100 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.101 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.101 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:29:13.100642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.102 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.103 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.103 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:29:13.103420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.108 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.112 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.116 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.120 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.122 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.123 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes volume: 8664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.123 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:29:13.122423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.124 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.125 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.125 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.126 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.126 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.127 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.129 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.129 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.130 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.130 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.130 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:29:13.125647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:29:13.129274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.132 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.133 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.133 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:29:13.132647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.133 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.134 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.135 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.135 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:29:13.135453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.181 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.181 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.181 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.228 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.229 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.229 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.274 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.274 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.274 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.320 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.320 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.320 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.321 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.322 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.322 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:29:13.322057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.323 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.323 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:29:13.324460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.324 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes volume: 7634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.325 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.325 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.326 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:29:13.326573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.360 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.361 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.361 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.382 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.382 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.382 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.405 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.405 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.406 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.430 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.431 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.431 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.433 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 864994696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:29:13.432435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.433 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104660889 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.433 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104208362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.433 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 661561745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.434 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 116074178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.434 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.latency volume: 93869390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.434 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.434 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.434 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.435 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 670468778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.435 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 113543433 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.435 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 206559376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.436 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.437 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:29:13.436820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.437 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.437 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.438 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.438 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.438 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.438 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.438 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.439 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.439 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.439 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.440 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.440 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:29:13.440561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.441 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.441 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.441 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.441 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.442 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.442 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.443 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.443 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:29:13.442899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.443 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.443 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.443 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.444 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.444 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.445 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.445 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.445 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:29:13.446303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.446 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.447 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.447 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.447 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.447 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.447 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.448 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.448 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.448 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.448 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.448 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.449 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.449 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.450 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 2502666553 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:29:13.449691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.450 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 10231196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.450 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.450 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 1359285126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.450 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 10614908 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.451 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.451 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.451 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.451 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.451 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 3305858753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.452 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 11917091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.452 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:29:13.453212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.453 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.454 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.454 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.454 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.454 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.454 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.455 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.455 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.455 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.455 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.457 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:29:13.456630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.457 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.457 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.457 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.457 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.458 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.458 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.458 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.458 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.458 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.458 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.459 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.459 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.459 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.460 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.460 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:29:13.459984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.460 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.460 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.460 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.461 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:29:13.461737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.463 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:29:13.463094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:29:13.464424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.466 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:29:13.466091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.466 14 DEBUG ceilometer.compute.pollsters [-] bf3e67ac-baba-4747-bf94-df866e53bdf9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.466 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.467 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:29:13.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:29:14 compute-0 nova_compute[194781]: 2025-10-02 19:29:14.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:15 compute-0 nova_compute[194781]: 2025-10-02 19:29:15.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:15 compute-0 podman[251407]: 2025-10-02 19:29:15.760656623 +0000 UTC m=+0.117132943 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 19:29:15 compute-0 podman[251405]: 2025-10-02 19:29:15.76832657 +0000 UTC m=+0.141343985 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:29:15 compute-0 podman[251406]: 2025-10-02 19:29:15.800683361 +0000 UTC m=+0.157572312 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, 
io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:29:18 compute-0 nova_compute[194781]: 2025-10-02 19:29:18.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:19 compute-0 nova_compute[194781]: 2025-10-02 19:29:19.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:20 compute-0 nova_compute[194781]: 2025-10-02 19:29:20.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:20 compute-0 nova_compute[194781]: 2025-10-02 19:29:20.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:21 compute-0 nova_compute[194781]: 2025-10-02 19:29:21.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:22 compute-0 nova_compute[194781]: 2025-10-02 19:29:22.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:22 compute-0 nova_compute[194781]: 2025-10-02 19:29:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:22 compute-0 nova_compute[194781]: 2025-10-02 19:29:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:22 compute-0 unix_chkpwd[251465]: password check failed for user (root)
Oct 02 19:29:22 compute-0 sshd-session[251463]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:29:22 compute-0 podman[251467]: 2025-10-02 19:29:22.723736354 +0000 UTC m=+0.086761146 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 19:29:22 compute-0 podman[251466]: 2025-10-02 19:29:22.753987728 +0000 UTC m=+0.122602091 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:29:23 compute-0 nova_compute[194781]: 2025-10-02 19:29:23.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:23 compute-0 nova_compute[194781]: 2025-10-02 19:29:23.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:29:24 compute-0 sshd-session[251463]: Failed password for root from 91.224.92.108 port 27404 ssh2
Oct 02 19:29:24 compute-0 nova_compute[194781]: 2025-10-02 19:29:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:25 compute-0 unix_chkpwd[251508]: password check failed for user (root)
Oct 02 19:29:25 compute-0 nova_compute[194781]: 2025-10-02 19:29:25.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.068 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.161 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.224 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.226 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.294 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.296 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.360 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.361 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.427 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.434 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.498 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.501 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.577 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.587 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.652 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.653 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.737 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 podman[251529]: 2025-10-02 19:29:26.741695824 +0000 UTC m=+0.106913109 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.748 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 podman[251530]: 2025-10-02 19:29:26.760702245 +0000 UTC m=+0.126293130 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.827 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.829 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.918 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.919 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.983 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:26 compute-0 nova_compute[194781]: 2025-10-02 19:29:26.985 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.083 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.092 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.164 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.165 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.232 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.233 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.321 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.322 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.399 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:29:27 compute-0 sshd-session[251463]: Failed password for root from 91.224.92.108 port 27404 ssh2
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.770 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.772 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4603MB free_disk=72.43966293334961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.772 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.772 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.859 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.859 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance bf3e67ac-baba-4747-bf94-df866e53bdf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.859 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.859 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.860 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.860 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:29:27 compute-0 nova_compute[194781]: 2025-10-02 19:29:27.993 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:29:28 compute-0 nova_compute[194781]: 2025-10-02 19:29:28.011 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:29:28 compute-0 nova_compute[194781]: 2025-10-02 19:29:28.014 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:29:28 compute-0 nova_compute[194781]: 2025-10-02 19:29:28.014 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:28 compute-0 unix_chkpwd[251600]: password check failed for user (root)
Oct 02 19:29:29 compute-0 nova_compute[194781]: 2025-10-02 19:29:29.014 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:29:29 compute-0 nova_compute[194781]: 2025-10-02 19:29:29.015 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:29:29 compute-0 nova_compute[194781]: 2025-10-02 19:29:29.492 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:29:29 compute-0 nova_compute[194781]: 2025-10-02 19:29:29.493 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:29:29 compute-0 nova_compute[194781]: 2025-10-02 19:29:29.493 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:29:29 compute-0 nova_compute[194781]: 2025-10-02 19:29:29.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:29 compute-0 sshd-session[251463]: Failed password for root from 91.224.92.108 port 27404 ssh2
Oct 02 19:29:29 compute-0 podman[209015]: time="2025-10-02T19:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:29:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:29:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5205 "" "Go-http-client/1.1"
Oct 02 19:29:30 compute-0 sshd-session[251463]: Received disconnect from 91.224.92.108 port 27404:11:  [preauth]
Oct 02 19:29:30 compute-0 sshd-session[251463]: Disconnected from authenticating user root 91.224.92.108 port 27404 [preauth]
Oct 02 19:29:30 compute-0 sshd-session[251463]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:29:30 compute-0 nova_compute[194781]: 2025-10-02 19:29:30.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:30 compute-0 nova_compute[194781]: 2025-10-02 19:29:30.683 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [{"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:29:30 compute-0 nova_compute[194781]: 2025-10-02 19:29:30.704 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:29:30 compute-0 nova_compute[194781]: 2025-10-02 19:29:30.705 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:29:30 compute-0 unix_chkpwd[251603]: password check failed for user (root)
Oct 02 19:29:30 compute-0 sshd-session[251601]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: ERROR   19:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: ERROR   19:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: ERROR   19:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:29:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:29:32 compute-0 sshd-session[251601]: Failed password for root from 91.224.92.108 port 46688 ssh2
Oct 02 19:29:33 compute-0 unix_chkpwd[251604]: password check failed for user (root)
Oct 02 19:29:34 compute-0 nova_compute[194781]: 2025-10-02 19:29:34.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:35 compute-0 nova_compute[194781]: 2025-10-02 19:29:35.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:35 compute-0 podman[251605]: 2025-10-02 19:29:35.706304393 +0000 UTC m=+0.082373508 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:29:36 compute-0 sshd-session[251601]: Failed password for root from 91.224.92.108 port 46688 ssh2
Oct 02 19:29:37 compute-0 unix_chkpwd[251627]: password check failed for user (root)
Oct 02 19:29:39 compute-0 sshd-session[251601]: Failed password for root from 91.224.92.108 port 46688 ssh2
Oct 02 19:29:39 compute-0 nova_compute[194781]: 2025-10-02 19:29:39.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.267 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.268 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.268 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.269 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.269 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:40 compute-0 sshd-session[251601]: Received disconnect from 91.224.92.108 port 46688:11:  [preauth]
Oct 02 19:29:40 compute-0 sshd-session[251601]: Disconnected from authenticating user root 91.224.92.108 port 46688 [preauth]
Oct 02 19:29:40 compute-0 sshd-session[251601]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.274 2 INFO nova.compute.manager [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Terminating instance
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.276 2 DEBUG nova.compute.manager [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:29:40 compute-0 kernel: tapdff3ea95-fa (unregistering): left promiscuous mode
Oct 02 19:29:40 compute-0 NetworkManager[52324]: <info>  [1759433380.3304] device (tapdff3ea95-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:29:40 compute-0 ovn_controller[97052]: 2025-10-02T19:29:40Z|00050|binding|INFO|Releasing lport dff3ea95-fab2-4bcb-9315-6a89cf30ad89 from this chassis (sb_readonly=0)
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 ovn_controller[97052]: 2025-10-02T19:29:40Z|00051|binding|INFO|Setting lport dff3ea95-fab2-4bcb-9315-6a89cf30ad89 down in Southbound
Oct 02 19:29:40 compute-0 ovn_controller[97052]: 2025-10-02T19:29:40Z|00052|binding|INFO|Removing iface tapdff3ea95-fa ovn-installed in OVS
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.362 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:95:b6 192.168.0.239'], port_security=['fa:16:3e:28:95:b6 192.168.0.239'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-kewzjvdnt5lz-ntmph4mpmrke-cvcdtekesgtz-port-adgemvcynrcg', 'neutron:cidrs': '192.168.0.239/24', 'neutron:device_id': 'bf3e67ac-baba-4747-bf94-df866e53bdf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-kewzjvdnt5lz-ntmph4mpmrke-cvcdtekesgtz-port-adgemvcynrcg', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=dff3ea95-fab2-4bcb-9315-6a89cf30ad89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.364 105943 INFO neutron.agent.ovn.metadata.agent [-] Port dff3ea95-fab2-4bcb-9315-6a89cf30ad89 in datapath b5760fda-9195-4e68-8506-4362bf1edf4f unbound from our chassis
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.366 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 02 19:29:40 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 5min 12.487s CPU time.
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.403 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[28e8cff1-07cb-499b-b00c-5e3f8ea6a4aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:29:40 compute-0 systemd-machined[154795]: Machine qemu-2-instance-00000002 terminated.
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.450 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[51031c03-5da1-44a1-9181-c9d230af3360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.453 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c01074-cbf4-4ab8-bc88-4fedea49b1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.487 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[1e00caaf-45ed-40e7-ba42-bdb9f0c8beda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.508 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[864852e2-63ca-4da0-af7e-4561911e4990]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 25007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251644, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.533 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebf52b1-f00d-42dd-be93-d59622075cca]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394432, 'tstamp': 394432}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251649, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394434, 'tstamp': 394434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251649, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.535 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.545 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.546 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.546 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:29:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:40.546 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.572 2 INFO nova.virt.libvirt.driver [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Instance destroyed successfully.
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.572 2 DEBUG nova.objects.instance [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'resources' on Instance uuid bf3e67ac-baba-4747-bf94-df866e53bdf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.590 2 DEBUG nova.virt.libvirt.vif [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:20:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-ntmph4mpmrke-cvcdtekesgtz-vnf-npbodekxu2s6',id=2,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:20:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-gmmcx4ea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:20:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjczNzY1ODM3MzMyODUzNzU2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW
5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIy
BUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZC
hmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1
Oct 02 19:29:40 compute-0 nova_compute[194781]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjczNzY1OD
M3MzMyODUzNzU2Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI3Mzc2NTgzNzMzMjg1Mzc1Njc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNzM3NjU4MzczMzI4NTM3NTY3PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=bf3e67ac-baba-4747-bf94-df866e53bdf9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.591 2 DEBUG nova.network.os_vif_util [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "address": "fa:16:3e:28:95:b6", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdff3ea95-fa", "ovs_interfaceid": "dff3ea95-fab2-4bcb-9315-6a89cf30ad89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.592 2 DEBUG nova.network.os_vif_util [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.592 2 DEBUG os_vif [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdff3ea95-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.599 2 INFO os_vif [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:95:b6,bridge_name='br-int',has_traffic_filtering=True,id=dff3ea95-fab2-4bcb-9315-6a89cf30ad89,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdff3ea95-fa')
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.600 2 INFO nova.virt.libvirt.driver [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Deleting instance files /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9_del
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.600 2 INFO nova.virt.libvirt.driver [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Deletion of /var/lib/nova/instances/bf3e67ac-baba-4747-bf94-df866e53bdf9_del complete
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.668 2 DEBUG nova.virt.libvirt.host [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.668 2 INFO nova.virt.libvirt.host [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] UEFI support detected
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.671 2 INFO nova.compute.manager [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Took 0.39 seconds to destroy the instance on the hypervisor.
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.672 2 DEBUG oslo.service.loopingcall [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.673 2 DEBUG nova.compute.manager [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.673 2 DEBUG nova.network.neutron [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.787 2 DEBUG nova.compute.manager [req-2846497f-483b-4dbd-a31b-4eaf0741fe31 req-a1fa3c70-6e8f-496f-b8d4-629cbef4d036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-vif-unplugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.788 2 DEBUG oslo_concurrency.lockutils [req-2846497f-483b-4dbd-a31b-4eaf0741fe31 req-a1fa3c70-6e8f-496f-b8d4-629cbef4d036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.788 2 DEBUG oslo_concurrency.lockutils [req-2846497f-483b-4dbd-a31b-4eaf0741fe31 req-a1fa3c70-6e8f-496f-b8d4-629cbef4d036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.788 2 DEBUG oslo_concurrency.lockutils [req-2846497f-483b-4dbd-a31b-4eaf0741fe31 req-a1fa3c70-6e8f-496f-b8d4-629cbef4d036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.789 2 DEBUG nova.compute.manager [req-2846497f-483b-4dbd-a31b-4eaf0741fe31 req-a1fa3c70-6e8f-496f-b8d4-629cbef4d036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] No waiting events found dispatching network-vif-unplugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:29:40 compute-0 nova_compute[194781]: 2025-10-02 19:29:40.789 2 DEBUG nova.compute.manager [req-2846497f-483b-4dbd-a31b-4eaf0741fe31 req-a1fa3c70-6e8f-496f-b8d4-629cbef4d036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-vif-unplugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:29:40 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:29:40.590 2 DEBUG nova.virt.libvirt.vif [None req-83e259cc-5ee1-44 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:29:41 compute-0 unix_chkpwd[251666]: password check failed for user (root)
Oct 02 19:29:41 compute-0 sshd-session[251632]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:29:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:41.273 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:29:41 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:41.274 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:29:41 compute-0 nova_compute[194781]: 2025-10-02 19:29:41.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:42 compute-0 sshd-session[251632]: Failed password for root from 91.224.92.108 port 57086 ssh2
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.911 2 DEBUG nova.network.neutron [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.918 2 DEBUG nova.compute.manager [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-changed-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.918 2 DEBUG nova.compute.manager [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Refreshing instance network info cache due to event network-changed-dff3ea95-fab2-4bcb-9315-6a89cf30ad89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.919 2 DEBUG oslo_concurrency.lockutils [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.919 2 DEBUG oslo_concurrency.lockutils [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.920 2 DEBUG nova.network.neutron [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Refreshing network info cache for port dff3ea95-fab2-4bcb-9315-6a89cf30ad89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:29:42 compute-0 nova_compute[194781]: 2025-10-02 19:29:42.952 2 INFO nova.compute.manager [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Took 2.28 seconds to deallocate network for instance.
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.014 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.015 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.160 2 DEBUG nova.compute.provider_tree [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.184 2 DEBUG nova.scheduler.client.report [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.233 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.301 2 INFO nova.scheduler.client.report [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Deleted allocations for instance bf3e67ac-baba-4747-bf94-df866e53bdf9
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.315 2 DEBUG nova.network.neutron [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:29:43 compute-0 nova_compute[194781]: 2025-10-02 19:29:43.393 2 DEBUG oslo_concurrency.lockutils [None req-83e259cc-5ee1-449f-8c55-188403c05f3b 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:43 compute-0 podman[251667]: 2025-10-02 19:29:43.726411186 +0000 UTC m=+0.094108704 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:29:43 compute-0 podman[251668]: 2025-10-02 19:29:43.726704534 +0000 UTC m=+0.091881064 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute)
Oct 02 19:29:44 compute-0 unix_chkpwd[251703]: password check failed for user (root)
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.770 2 DEBUG nova.network.neutron [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.832 2 DEBUG oslo_concurrency.lockutils [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-bf3e67ac-baba-4747-bf94-df866e53bdf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.832 2 DEBUG nova.compute.manager [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.833 2 DEBUG oslo_concurrency.lockutils [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.833 2 DEBUG oslo_concurrency.lockutils [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.833 2 DEBUG oslo_concurrency.lockutils [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "bf3e67ac-baba-4747-bf94-df866e53bdf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.834 2 DEBUG nova.compute.manager [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] No waiting events found dispatching network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:29:44 compute-0 nova_compute[194781]: 2025-10-02 19:29:44.834 2 WARNING nova.compute.manager [req-4ea716b7-ec38-4744-848a-645294550a9e req-cc4e3fa9-ee8f-4e8a-b3d2-00c3c408a752 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Received unexpected event network-vif-plugged-dff3ea95-fab2-4bcb-9315-6a89cf30ad89 for instance with vm_state active and task_state deleting.
Oct 02 19:29:45 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:45.276 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:29:45 compute-0 nova_compute[194781]: 2025-10-02 19:29:45.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:45 compute-0 nova_compute[194781]: 2025-10-02 19:29:45.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:46 compute-0 sshd-session[251632]: Failed password for root from 91.224.92.108 port 57086 ssh2
Oct 02 19:29:46 compute-0 podman[251704]: 2025-10-02 19:29:46.712722425 +0000 UTC m=+0.076853860 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public)
Oct 02 19:29:46 compute-0 podman[251706]: 2025-10-02 19:29:46.74077509 +0000 UTC m=+0.081168576 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:29:46 compute-0 podman[251705]: 2025-10-02 19:29:46.740947564 +0000 UTC m=+0.083606471 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543)
Oct 02 19:29:47 compute-0 unix_chkpwd[251760]: password check failed for user (root)
Oct 02 19:29:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:47.464 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:29:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:47.465 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:29:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:29:47.466 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:29:48 compute-0 sshd-session[251632]: Failed password for root from 91.224.92.108 port 57086 ssh2
Oct 02 19:29:50 compute-0 nova_compute[194781]: 2025-10-02 19:29:50.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:50 compute-0 sshd-session[251632]: Received disconnect from 91.224.92.108 port 57086:11:  [preauth]
Oct 02 19:29:50 compute-0 sshd-session[251632]: Disconnected from authenticating user root 91.224.92.108 port 57086 [preauth]
Oct 02 19:29:50 compute-0 sshd-session[251632]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:29:50 compute-0 nova_compute[194781]: 2025-10-02 19:29:50.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:53 compute-0 podman[251764]: 2025-10-02 19:29:53.751782291 +0000 UTC m=+0.096347663 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:29:53 compute-0 podman[251763]: 2025-10-02 19:29:53.764564895 +0000 UTC m=+0.116702461 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:29:55 compute-0 nova_compute[194781]: 2025-10-02 19:29:55.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:55 compute-0 nova_compute[194781]: 2025-10-02 19:29:55.569 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759433380.5674043, bf3e67ac-baba-4747-bf94-df866e53bdf9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:29:55 compute-0 nova_compute[194781]: 2025-10-02 19:29:55.569 2 INFO nova.compute.manager [-] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] VM Stopped (Lifecycle Event)
Oct 02 19:29:55 compute-0 nova_compute[194781]: 2025-10-02 19:29:55.590 2 DEBUG nova.compute.manager [None req-fa7d6536-e84b-4b11-b694-65ff5904f359 - - - - - -] [instance: bf3e67ac-baba-4747-bf94-df866e53bdf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:29:55 compute-0 nova_compute[194781]: 2025-10-02 19:29:55.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:29:57 compute-0 podman[251804]: 2025-10-02 19:29:57.708334057 +0000 UTC m=+0.089045918 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:29:57 compute-0 podman[251805]: 2025-10-02 19:29:57.737898432 +0000 UTC m=+0.114417010 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 02 19:29:59 compute-0 podman[209015]: time="2025-10-02T19:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:29:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:29:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5206 "" "Go-http-client/1.1"
Oct 02 19:30:00 compute-0 nova_compute[194781]: 2025-10-02 19:30:00.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:00 compute-0 nova_compute[194781]: 2025-10-02 19:30:00.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: ERROR   19:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: ERROR   19:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: ERROR   19:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:30:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:30:05 compute-0 nova_compute[194781]: 2025-10-02 19:30:05.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:05 compute-0 nova_compute[194781]: 2025-10-02 19:30:05.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:06 compute-0 podman[251848]: 2025-10-02 19:30:06.716598312 +0000 UTC m=+0.076217103 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:30:10 compute-0 nova_compute[194781]: 2025-10-02 19:30:10.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:10 compute-0 nova_compute[194781]: 2025-10-02 19:30:10.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:14 compute-0 podman[251871]: 2025-10-02 19:30:14.724878271 +0000 UTC m=+0.092850099 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 19:30:14 compute-0 podman[251872]: 2025-10-02 19:30:14.771365293 +0000 UTC m=+0.134077390 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:30:15 compute-0 nova_compute[194781]: 2025-10-02 19:30:15.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:15 compute-0 nova_compute[194781]: 2025-10-02 19:30:15.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:15 compute-0 ovn_controller[97052]: 2025-10-02T19:30:15Z|00053|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Oct 02 19:30:17 compute-0 podman[251907]: 2025-10-02 19:30:17.720755789 +0000 UTC m=+0.091220586 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct 02 19:30:17 compute-0 podman[251908]: 2025-10-02 19:30:17.751863866 +0000 UTC m=+0.110348811 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_id=edpm, container_name=kepler, release=1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container)
Oct 02 19:30:17 compute-0 podman[251909]: 2025-10-02 19:30:17.752047081 +0000 UTC m=+0.110348811 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:30:20 compute-0 nova_compute[194781]: 2025-10-02 19:30:20.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:20 compute-0 nova_compute[194781]: 2025-10-02 19:30:20.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:20 compute-0 nova_compute[194781]: 2025-10-02 19:30:20.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:22 compute-0 nova_compute[194781]: 2025-10-02 19:30:22.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:22 compute-0 nova_compute[194781]: 2025-10-02 19:30:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:22 compute-0 nova_compute[194781]: 2025-10-02 19:30:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:22 compute-0 nova_compute[194781]: 2025-10-02 19:30:22.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:22 compute-0 nova_compute[194781]: 2025-10-02 19:30:22.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:30:22 compute-0 nova_compute[194781]: 2025-10-02 19:30:22.055 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:30:23 compute-0 nova_compute[194781]: 2025-10-02 19:30:23.055 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:24 compute-0 nova_compute[194781]: 2025-10-02 19:30:24.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:24 compute-0 nova_compute[194781]: 2025-10-02 19:30:24.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:24 compute-0 nova_compute[194781]: 2025-10-02 19:30:24.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:30:24 compute-0 podman[251968]: 2025-10-02 19:30:24.702343704 +0000 UTC m=+0.068701120 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:30:24 compute-0 podman[251969]: 2025-10-02 19:30:24.707720335 +0000 UTC m=+0.074001749 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 19:30:25 compute-0 nova_compute[194781]: 2025-10-02 19:30:25.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:25 compute-0 nova_compute[194781]: 2025-10-02 19:30:25.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.189 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.218 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.218 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.219 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.219 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.352 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.453 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.454 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.527 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.529 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.597 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.598 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.721 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.737 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.822 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.823 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.901 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.902 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.970 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:26 compute-0 nova_compute[194781]: 2025-10-02 19:30:26.972 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.037 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.047 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.141 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.142 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.203 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.205 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.272 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.274 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.365 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.816 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.817 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4775MB free_disk=72.46216201782227GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.818 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:30:27 compute-0 nova_compute[194781]: 2025-10-02 19:30:27.818 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.025 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.026 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.026 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.026 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.027 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.110 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.201 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.201 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.216 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.242 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.322 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.340 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.362 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:30:28 compute-0 nova_compute[194781]: 2025-10-02 19:30:28.362 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:30:28 compute-0 podman[252045]: 2025-10-02 19:30:28.777453572 +0000 UTC m=+0.133917917 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:30:28 compute-0 podman[252046]: 2025-10-02 19:30:28.786051537 +0000 UTC m=+0.145618304 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 19:30:29 compute-0 podman[209015]: time="2025-10-02T19:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:30:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:30:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5201 "" "Go-http-client/1.1"
Oct 02 19:30:30 compute-0 nova_compute[194781]: 2025-10-02 19:30:30.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:30 compute-0 nova_compute[194781]: 2025-10-02 19:30:30.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:31 compute-0 nova_compute[194781]: 2025-10-02 19:30:31.208 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:31 compute-0 nova_compute[194781]: 2025-10-02 19:30:31.209 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: ERROR   19:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: ERROR   19:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: ERROR   19:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:30:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:30:31 compute-0 nova_compute[194781]: 2025-10-02 19:30:31.519 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:30:31 compute-0 nova_compute[194781]: 2025-10-02 19:30:31.519 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:30:31 compute-0 nova_compute[194781]: 2025-10-02 19:30:31.520 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:30:34 compute-0 nova_compute[194781]: 2025-10-02 19:30:34.605 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updating instance_info_cache with network_info: [{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:30:34 compute-0 nova_compute[194781]: 2025-10-02 19:30:34.627 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:30:34 compute-0 nova_compute[194781]: 2025-10-02 19:30:34.628 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:30:35 compute-0 nova_compute[194781]: 2025-10-02 19:30:35.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:35 compute-0 nova_compute[194781]: 2025-10-02 19:30:35.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:30:35 compute-0 nova_compute[194781]: 2025-10-02 19:30:35.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:35 compute-0 nova_compute[194781]: 2025-10-02 19:30:35.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:37 compute-0 nova_compute[194781]: 2025-10-02 19:30:37.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:30:37 compute-0 podman[252088]: 2025-10-02 19:30:37.688724156 +0000 UTC m=+0.061289616 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:30:40 compute-0 nova_compute[194781]: 2025-10-02 19:30:40.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:40 compute-0 nova_compute[194781]: 2025-10-02 19:30:40.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:45 compute-0 nova_compute[194781]: 2025-10-02 19:30:45.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:45 compute-0 nova_compute[194781]: 2025-10-02 19:30:45.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:45 compute-0 podman[252112]: 2025-10-02 19:30:45.704271818 +0000 UTC m=+0.079412600 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:30:45 compute-0 podman[252113]: 2025-10-02 19:30:45.735598789 +0000 UTC m=+0.105125574 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct 02 19:30:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:30:47.465 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:30:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:30:47.467 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:30:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:30:47.468 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:30:48 compute-0 podman[252149]: 2025-10-02 19:30:48.723770486 +0000 UTC m=+0.087666566 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 02 19:30:48 compute-0 podman[252148]: 2025-10-02 19:30:48.734390514 +0000 UTC m=+0.096752924 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, version=9.4, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:30:48 compute-0 podman[252147]: 2025-10-02 19:30:48.744333995 +0000 UTC m=+0.104825956 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:30:50 compute-0 nova_compute[194781]: 2025-10-02 19:30:50.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:50 compute-0 nova_compute[194781]: 2025-10-02 19:30:50.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:55 compute-0 nova_compute[194781]: 2025-10-02 19:30:55.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:55 compute-0 nova_compute[194781]: 2025-10-02 19:30:55.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:30:55 compute-0 podman[252204]: 2025-10-02 19:30:55.726699898 +0000 UTC m=+0.100127882 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:30:55 compute-0 podman[252205]: 2025-10-02 19:30:55.737310446 +0000 UTC m=+0.093444968 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 02 19:30:59 compute-0 podman[252243]: 2025-10-02 19:30:59.704566889 +0000 UTC m=+0.074670256 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 19:30:59 compute-0 podman[209015]: time="2025-10-02T19:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:30:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:30:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5207 "" "Go-http-client/1.1"
Oct 02 19:30:59 compute-0 podman[252244]: 2025-10-02 19:30:59.783840575 +0000 UTC m=+0.150695367 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:31:00 compute-0 nova_compute[194781]: 2025-10-02 19:31:00.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:00 compute-0 nova_compute[194781]: 2025-10-02 19:31:00.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: ERROR   19:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: ERROR   19:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: ERROR   19:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:31:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:31:05 compute-0 nova_compute[194781]: 2025-10-02 19:31:05.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:05 compute-0 nova_compute[194781]: 2025-10-02 19:31:05.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.165 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.197 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.198 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid defe27ca-18ff-45c1-a96c-13a1d0d76474 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.198 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid 1399f3a8-2c63-4b73-b015-f96a55b3d59f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.199 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.200 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.201 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.202 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.202 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.203 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.238 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.251 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:07 compute-0 nova_compute[194781]: 2025-10-02 19:31:07.252 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:08 compute-0 podman[252284]: 2025-10-02 19:31:08.742628708 +0000 UTC m=+0.109257922 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:31:10 compute-0 nova_compute[194781]: 2025-10-02 19:31:10.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:10 compute-0 nova_compute[194781]: 2025-10-02 19:31:10.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.942 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.942 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.942 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:12.999 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.006 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'defe27ca-18ff-45c1-a96c-13a1d0d76474', 'name': 'vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.011 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.016 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1399f3a8-2c63-4b73-b015-f96a55b3d59f', 'name': 'vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {'metering.server_group': '1264e536-3255-4eb3-9284-12888e889ce8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.016 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.017 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:31:13.017398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.053 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/cpu volume: 37660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.089 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 41990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.125 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/cpu volume: 34720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.127 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:31:13.128559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.130 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.130 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.131 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.133 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.133 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:31:13.133735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.133 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.140 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.145 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.149 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.150 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.150 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes volume: 1954 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.150 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.151 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.152 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.153 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.153 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.153 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:31:13.150420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:31:13.152806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.155 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:31:13.155012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.155 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.155 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.157 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:31:13.157133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.157 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.158 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.158 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.159 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:31:13.159480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.219 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.219 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.220 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.284 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.284 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.284 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.332 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.332 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.332 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.334 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.334 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.334 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.335 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:31:13.333936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:31:13.335398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:31:13.337138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.364 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.365 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.365 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.393 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.394 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.394 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.422 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.423 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.423 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.424 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.425 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.425 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 864994696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.425 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104660889 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.426 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.latency volume: 104208362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.426 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:31:13.425142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.426 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.427 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.427 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 670468778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.427 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 113543433 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.428 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.latency volume: 206559376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.429 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.429 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.430 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.431 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:31:13.430432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.431 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.431 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.432 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.432 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.432 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.433 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.433 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.435 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:31:13.435035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.435 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.436 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.437 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:31:13.437473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.438 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.438 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.438 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.439 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.440 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.440 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.442 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:31:13.442226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.443 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.443 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.443 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.444 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.445 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.445 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.446 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.446 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.447 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.447 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 2502666553 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:31:13.447156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.447 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 10231196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.448 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.448 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.448 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.449 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.449 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 3305858753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.449 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 11917091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.450 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.451 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.451 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.452 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:31:13.451898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.452 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.453 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.453 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.453 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.454 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.454 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.454 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.455 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:31:13.456893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.457 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.457 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.458 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.458 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.458 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.459 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.459 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.459 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.460 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.461 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.462 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:31:13.461892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.462 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.463 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.464 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:31:13.464764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:31:13.466692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.468 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.469 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:31:13.468907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.469 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.470 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.471 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.471 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.472 14 DEBUG ceilometer.compute.pollsters [-] defe27ca-18ff-45c1-a96c-13a1d0d76474/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:31:13.471855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.472 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.473 14 DEBUG ceilometer.compute.pollsters [-] 1399f3a8-2c63-4b73-b015-f96a55b3d59f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.477 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.478 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.479 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.480 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.480 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.480 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.480 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.481 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.481 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.481 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.481 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:31:13.481 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:31:15 compute-0 nova_compute[194781]: 2025-10-02 19:31:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:15 compute-0 nova_compute[194781]: 2025-10-02 19:31:15.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:16 compute-0 podman[252309]: 2025-10-02 19:31:16.724901155 +0000 UTC m=+0.090400468 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 19:31:16 compute-0 podman[252308]: 2025-10-02 19:31:16.740735619 +0000 UTC m=+0.120516116 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Oct 02 19:31:19 compute-0 podman[252347]: 2025-10-02 19:31:19.726231827 +0000 UTC m=+0.087488122 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container, name=ubi9)
Oct 02 19:31:19 compute-0 podman[252346]: 2025-10-02 19:31:19.736741952 +0000 UTC m=+0.105277108 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, config_id=edpm)
Oct 02 19:31:19 compute-0 podman[252348]: 2025-10-02 19:31:19.748611603 +0000 UTC m=+0.110831723 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 19:31:20 compute-0 nova_compute[194781]: 2025-10-02 19:31:20.071 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:20 compute-0 nova_compute[194781]: 2025-10-02 19:31:20.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:20 compute-0 nova_compute[194781]: 2025-10-02 19:31:20.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:22 compute-0 nova_compute[194781]: 2025-10-02 19:31:22.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:23 compute-0 nova_compute[194781]: 2025-10-02 19:31:23.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:23 compute-0 nova_compute[194781]: 2025-10-02 19:31:23.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:24 compute-0 nova_compute[194781]: 2025-10-02 19:31:24.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:24 compute-0 nova_compute[194781]: 2025-10-02 19:31:24.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:24 compute-0 nova_compute[194781]: 2025-10-02 19:31:24.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:31:25 compute-0 nova_compute[194781]: 2025-10-02 19:31:25.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:25 compute-0 nova_compute[194781]: 2025-10-02 19:31:25.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.086 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.086 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.087 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.087 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.192 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.253 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.255 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.330 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.332 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.400 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.401 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.460 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.468 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.528 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.530 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.593 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.595 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.658 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.659 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 podman[252421]: 2025-10-02 19:31:26.71543516 +0000 UTC m=+0.084601266 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 19:31:26 compute-0 podman[252419]: 2025-10-02 19:31:26.715629195 +0000 UTC m=+0.077084289 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.740 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.747 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.810 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.811 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.876 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.877 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.938 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:26 compute-0 nova_compute[194781]: 2025-10-02 19:31:26.939 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.004 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.353 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.355 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4766MB free_disk=72.46216201782227GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.356 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.356 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.491 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.491 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance defe27ca-18ff-45c1-a96c-13a1d0d76474 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.491 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.492 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.492 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.579 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.603 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.606 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:31:27 compute-0 nova_compute[194781]: 2025-10-02 19:31:27.606 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:29 compute-0 podman[209015]: time="2025-10-02T19:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:31:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:31:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5206 "" "Go-http-client/1.1"
Oct 02 19:31:30 compute-0 nova_compute[194781]: 2025-10-02 19:31:30.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:30 compute-0 nova_compute[194781]: 2025-10-02 19:31:30.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:30 compute-0 podman[252477]: 2025-10-02 19:31:30.702566213 +0000 UTC m=+0.072657283 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct 02 19:31:30 compute-0 podman[252478]: 2025-10-02 19:31:30.759037602 +0000 UTC m=+0.134554894 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: ERROR   19:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: ERROR   19:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: ERROR   19:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:31:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:31:32 compute-0 nova_compute[194781]: 2025-10-02 19:31:32.608 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:31:32 compute-0 nova_compute[194781]: 2025-10-02 19:31:32.609 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:31:32 compute-0 nova_compute[194781]: 2025-10-02 19:31:32.830 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:31:32 compute-0 nova_compute[194781]: 2025-10-02 19:31:32.831 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:31:32 compute-0 nova_compute[194781]: 2025-10-02 19:31:32.831 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:31:34 compute-0 nova_compute[194781]: 2025-10-02 19:31:34.122 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updating instance_info_cache with network_info: [{"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:31:34 compute-0 nova_compute[194781]: 2025-10-02 19:31:34.149 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:31:34 compute-0 nova_compute[194781]: 2025-10-02 19:31:34.150 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:31:35 compute-0 nova_compute[194781]: 2025-10-02 19:31:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:35 compute-0 nova_compute[194781]: 2025-10-02 19:31:35.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:37 compute-0 nova_compute[194781]: 2025-10-02 19:31:37.567 2 DEBUG nova.compute.manager [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-changed-47329f1e-0ecb-476e-841d-aff3f14a7fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:31:37 compute-0 nova_compute[194781]: 2025-10-02 19:31:37.568 2 DEBUG nova.compute.manager [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Refreshing instance network info cache due to event network-changed-47329f1e-0ecb-476e-841d-aff3f14a7fcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:31:37 compute-0 nova_compute[194781]: 2025-10-02 19:31:37.568 2 DEBUG oslo_concurrency.lockutils [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:31:37 compute-0 nova_compute[194781]: 2025-10-02 19:31:37.569 2 DEBUG oslo_concurrency.lockutils [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:31:37 compute-0 nova_compute[194781]: 2025-10-02 19:31:37.570 2 DEBUG nova.network.neutron [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Refreshing network info cache for port 47329f1e-0ecb-476e-841d-aff3f14a7fcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.399 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.399 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.400 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.400 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.400 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.402 2 INFO nova.compute.manager [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Terminating instance
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.403 2 DEBUG nova.compute.manager [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:31:38 compute-0 kernel: tap47329f1e-0e (unregistering): left promiscuous mode
Oct 02 19:31:38 compute-0 NetworkManager[52324]: <info>  [1759433498.4437] device (tap47329f1e-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 ovn_controller[97052]: 2025-10-02T19:31:38Z|00054|binding|INFO|Releasing lport 47329f1e-0ecb-476e-841d-aff3f14a7fcc from this chassis (sb_readonly=0)
Oct 02 19:31:38 compute-0 ovn_controller[97052]: 2025-10-02T19:31:38Z|00055|binding|INFO|Setting lport 47329f1e-0ecb-476e-841d-aff3f14a7fcc down in Southbound
Oct 02 19:31:38 compute-0 ovn_controller[97052]: 2025-10-02T19:31:38Z|00056|binding|INFO|Removing iface tap47329f1e-0e ovn-installed in OVS
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 02 19:31:38 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 31.203s CPU time.
Oct 02 19:31:38 compute-0 systemd-machined[154795]: Machine qemu-3-instance-00000003 terminated.
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.549 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:6b:b2 192.168.0.44'], port_security=['fa:16:3e:6d:6b:b2 192.168.0.44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-kewzjvdnt5lz-xlxy3mith77z-2ybijppocvxs-port-kfbnvyepymmq', 'neutron:cidrs': '192.168.0.44/24', 'neutron:device_id': 'defe27ca-18ff-45c1-a96c-13a1d0d76474', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-kewzjvdnt5lz-xlxy3mith77z-2ybijppocvxs-port-kfbnvyepymmq', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=47329f1e-0ecb-476e-841d-aff3f14a7fcc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.551 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 47329f1e-0ecb-476e-841d-aff3f14a7fcc in datapath b5760fda-9195-4e68-8506-4362bf1edf4f unbound from our chassis
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.552 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.569 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[97bfe3fa-9a38-4164-bdfb-bf5a9b0e9405]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.578 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.610 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[b74adaa4-6fca-4c30-b58a-1efd9aa775bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.615 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5e992f-6a5e-45a1-b5e4-7436524d7557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.652 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5900d6-c532-47f1-a584-3fc9417b823a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.669 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9d8c35-944a-4278-812d-a7ef05f61dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 25007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252546, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.687 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[67944bc9-6793-41f8-96ba-4f50bf70c3df]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394432, 'tstamp': 394432}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252551, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394434, 'tstamp': 394434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252551, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.688 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.697 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.697 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.698 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.698 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:31:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:38.699 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.706 2 INFO nova.virt.libvirt.driver [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Instance destroyed successfully.
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.707 2 DEBUG nova.objects.instance [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'resources' on Instance uuid defe27ca-18ff-45c1-a96c-13a1d0d76474 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.743 2 DEBUG nova.virt.libvirt.vif [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:23:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-xlxy3mith77z-2ybijppocvxs-vnf-4npwtr46h3tu',id=3,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:23:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-k0vw4q79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:23:58Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDcxNTAyNDM0ODI0MjgzODkyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW
5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIy
BUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZC
hmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1
Oct 02 19:31:38 compute-0 nova_compute[194781]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDcxNTAyND
M0ODI0MjgzODkyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQ3MTUwMjQzNDgyNDI4Mzg5MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00NzE1MDI0MzQ4MjQyODM4OTI0PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=defe27ca-18ff-45c1-a96c-13a1d0d76474,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.743 2 DEBUG nova.network.os_vif_util [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.744 2 DEBUG nova.network.os_vif_util [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.745 2 DEBUG os_vif [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47329f1e-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.753 2 INFO os_vif [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6b:b2,bridge_name='br-int',has_traffic_filtering=True,id=47329f1e-0ecb-476e-841d-aff3f14a7fcc,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap47329f1e-0e')
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.753 2 INFO nova.virt.libvirt.driver [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Deleting instance files /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474_del
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.754 2 INFO nova.virt.libvirt.driver [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Deletion of /var/lib/nova/instances/defe27ca-18ff-45c1-a96c-13a1d0d76474_del complete
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.823 2 DEBUG nova.network.neutron [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updated VIF entry in instance network info cache for port 47329f1e-0ecb-476e-841d-aff3f14a7fcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.824 2 DEBUG nova.network.neutron [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updating instance_info_cache with network_info: [{"id": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "address": "fa:16:3e:6d:6b:b2", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47329f1e-0e", "ovs_interfaceid": "47329f1e-0ecb-476e-841d-aff3f14a7fcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.863 2 DEBUG oslo_concurrency.lockutils [req-577ab54a-0062-4aa4-ba1a-1eb0b19d2410 req-34ba849c-656d-4111-8c0d-59342a116a72 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-defe27ca-18ff-45c1-a96c-13a1d0d76474" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.865 2 INFO nova.compute.manager [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Took 0.46 seconds to destroy the instance on the hypervisor.
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.866 2 DEBUG oslo.service.loopingcall [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.867 2 DEBUG nova.compute.manager [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:31:38 compute-0 nova_compute[194781]: 2025-10-02 19:31:38.868 2 DEBUG nova.network.neutron [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:31:39 compute-0 nova_compute[194781]: 2025-10-02 19:31:39.006 2 DEBUG nova.compute.manager [req-cdf3a76f-382a-4fc0-b1a6-55b4ecc247bf req-6a0576cc-92bc-4e8f-9666-9c7d52cb18ff fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-vif-unplugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:31:39 compute-0 nova_compute[194781]: 2025-10-02 19:31:39.007 2 DEBUG oslo_concurrency.lockutils [req-cdf3a76f-382a-4fc0-b1a6-55b4ecc247bf req-6a0576cc-92bc-4e8f-9666-9c7d52cb18ff fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:39 compute-0 nova_compute[194781]: 2025-10-02 19:31:39.007 2 DEBUG oslo_concurrency.lockutils [req-cdf3a76f-382a-4fc0-b1a6-55b4ecc247bf req-6a0576cc-92bc-4e8f-9666-9c7d52cb18ff fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:39 compute-0 nova_compute[194781]: 2025-10-02 19:31:39.008 2 DEBUG oslo_concurrency.lockutils [req-cdf3a76f-382a-4fc0-b1a6-55b4ecc247bf req-6a0576cc-92bc-4e8f-9666-9c7d52cb18ff fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:39 compute-0 nova_compute[194781]: 2025-10-02 19:31:39.009 2 DEBUG nova.compute.manager [req-cdf3a76f-382a-4fc0-b1a6-55b4ecc247bf req-6a0576cc-92bc-4e8f-9666-9c7d52cb18ff fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] No waiting events found dispatching network-vif-unplugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:31:39 compute-0 nova_compute[194781]: 2025-10-02 19:31:39.010 2 DEBUG nova.compute.manager [req-cdf3a76f-382a-4fc0-b1a6-55b4ecc247bf req-6a0576cc-92bc-4e8f-9666-9c7d52cb18ff fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-vif-unplugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:31:39 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:31:38.743 2 DEBUG nova.virt.libvirt.vif [None req-ab378435-75ca-4b [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:31:39 compute-0 podman[252557]: 2025-10-02 19:31:39.714575711 +0000 UTC m=+0.081612398 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:40.700 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.702 2 DEBUG nova.network.neutron [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.725 2 INFO nova.compute.manager [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Took 1.86 seconds to deallocate network for instance.
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.790 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.791 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.910 2 DEBUG nova.compute.provider_tree [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.926 2 DEBUG nova.scheduler.client.report [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.949 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:40 compute-0 nova_compute[194781]: 2025-10-02 19:31:40.987 2 INFO nova.scheduler.client.report [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Deleted allocations for instance defe27ca-18ff-45c1-a96c-13a1d0d76474
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.086 2 DEBUG oslo_concurrency.lockutils [None req-ab378435-75ca-4bc1-89d6-cbdfe6c3d20a 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.123 2 DEBUG nova.compute.manager [req-7afd6170-d2f4-4351-bf3f-80b0c22c128d req-064907b3-7399-47d5-bf6e-f2069eef1f01 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.123 2 DEBUG oslo_concurrency.lockutils [req-7afd6170-d2f4-4351-bf3f-80b0c22c128d req-064907b3-7399-47d5-bf6e-f2069eef1f01 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.124 2 DEBUG oslo_concurrency.lockutils [req-7afd6170-d2f4-4351-bf3f-80b0c22c128d req-064907b3-7399-47d5-bf6e-f2069eef1f01 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.124 2 DEBUG oslo_concurrency.lockutils [req-7afd6170-d2f4-4351-bf3f-80b0c22c128d req-064907b3-7399-47d5-bf6e-f2069eef1f01 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "defe27ca-18ff-45c1-a96c-13a1d0d76474-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.125 2 DEBUG nova.compute.manager [req-7afd6170-d2f4-4351-bf3f-80b0c22c128d req-064907b3-7399-47d5-bf6e-f2069eef1f01 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] No waiting events found dispatching network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:31:41 compute-0 nova_compute[194781]: 2025-10-02 19:31:41.125 2 WARNING nova.compute.manager [req-7afd6170-d2f4-4351-bf3f-80b0c22c128d req-064907b3-7399-47d5-bf6e-f2069eef1f01 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Received unexpected event network-vif-plugged-47329f1e-0ecb-476e-841d-aff3f14a7fcc for instance with vm_state deleted and task_state None.
Oct 02 19:31:43 compute-0 nova_compute[194781]: 2025-10-02 19:31:43.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:45 compute-0 nova_compute[194781]: 2025-10-02 19:31:45.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:47.468 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:31:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:47.469 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:31:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:31:47.470 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:31:47 compute-0 podman[252584]: 2025-10-02 19:31:47.763807253 +0000 UTC m=+0.137030969 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:31:47 compute-0 podman[252583]: 2025-10-02 19:31:47.774924064 +0000 UTC m=+0.153477819 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:31:48 compute-0 nova_compute[194781]: 2025-10-02 19:31:48.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:50 compute-0 nova_compute[194781]: 2025-10-02 19:31:50.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:50 compute-0 podman[252622]: 2025-10-02 19:31:50.744301808 +0000 UTC m=+0.100608335 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:31:50 compute-0 podman[252621]: 2025-10-02 19:31:50.759156967 +0000 UTC m=+0.130986981 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, config_id=edpm, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Oct 02 19:31:50 compute-0 podman[252620]: 2025-10-02 19:31:50.76653784 +0000 UTC m=+0.129903282 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350)
Oct 02 19:31:53 compute-0 nova_compute[194781]: 2025-10-02 19:31:53.702 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759433498.7019126, defe27ca-18ff-45c1-a96c-13a1d0d76474 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:31:53 compute-0 nova_compute[194781]: 2025-10-02 19:31:53.703 2 INFO nova.compute.manager [-] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] VM Stopped (Lifecycle Event)
Oct 02 19:31:53 compute-0 nova_compute[194781]: 2025-10-02 19:31:53.721 2 DEBUG nova.compute.manager [None req-ca92c946-c15a-4155-8be4-a6125e9cc2db - - - - - -] [instance: defe27ca-18ff-45c1-a96c-13a1d0d76474] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:31:53 compute-0 nova_compute[194781]: 2025-10-02 19:31:53.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:55 compute-0 nova_compute[194781]: 2025-10-02 19:31:55.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:57 compute-0 podman[252678]: 2025-10-02 19:31:57.713374888 +0000 UTC m=+0.079838741 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 02 19:31:57 compute-0 podman[252677]: 2025-10-02 19:31:57.728814922 +0000 UTC m=+0.089541745 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:31:58 compute-0 nova_compute[194781]: 2025-10-02 19:31:58.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:31:59 compute-0 podman[209015]: time="2025-10-02T19:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:31:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:31:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5208 "" "Go-http-client/1.1"
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.407 2 DEBUG nova.compute.manager [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received event network-changed-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.408 2 DEBUG nova.compute.manager [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Refreshing instance network info cache due to event network-changed-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.408 2 DEBUG oslo_concurrency.lockutils [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.409 2 DEBUG oslo_concurrency.lockutils [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.409 2 DEBUG nova.network.neutron [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Refreshing network info cache for port 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.458 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.458 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.459 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.459 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.460 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.462 2 INFO nova.compute.manager [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Terminating instance
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.463 2 DEBUG nova.compute.manager [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:32:00 compute-0 kernel: tap1d3b6f60-e6 (unregistering): left promiscuous mode
Oct 02 19:32:00 compute-0 NetworkManager[52324]: <info>  [1759433520.5124] device (tap1d3b6f60-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:32:00 compute-0 ovn_controller[97052]: 2025-10-02T19:32:00Z|00057|binding|INFO|Releasing lport 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd from this chassis (sb_readonly=0)
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 ovn_controller[97052]: 2025-10-02T19:32:00Z|00058|binding|INFO|Setting lport 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd down in Southbound
Oct 02 19:32:00 compute-0 ovn_controller[97052]: 2025-10-02T19:32:00Z|00059|binding|INFO|Removing iface tap1d3b6f60-e6 ovn-installed in OVS
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.537 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:e2:ba 192.168.0.10'], port_security=['fa:16:3e:b4:e2:ba 192.168.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-kewzjvdnt5lz-vt5t5337qak7-rqvxszkro6gs-port-fzv7g3jzrfep', 'neutron:cidrs': '192.168.0.10/24', 'neutron:device_id': '1399f3a8-2c63-4b73-b015-f96a55b3d59f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5760fda-9195-4e68-8506-4362bf1edf4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-kewzjvdnt5lz-vt5t5337qak7-rqvxszkro6gs-port-fzv7g3jzrfep', 'neutron:project_id': 'c6bd7784161a4cc3a2e8715feee92228', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72aaa87c-2798-4a9c-ab16-34693e3fe341', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21963977-c089-41a8-8d06-e659a781ceff, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.539 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd in datapath b5760fda-9195-4e68-8506-4362bf1edf4f unbound from our chassis
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.540 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5760fda-9195-4e68-8506-4362bf1edf4f
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.567 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[d55e74d1-582b-4e53-aa70-5edae82df5ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:32:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 02 19:32:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 18.170s CPU time.
Oct 02 19:32:00 compute-0 systemd-machined[154795]: Machine qemu-4-instance-00000004 terminated.
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.605 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[e303ef9b-720d-4dd3-a75a-9a51c3bcb673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.609 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[1925754e-861f-4bfd-8797-700c3f836bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.646 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[c19d634a-d0ac-4fef-8096-43dec68dc1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.670 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[192f7017-a5b6-4f04-a3e8-cfc1087d9625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5760fda-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:0b:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394420, 'reachable_time': 40274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252731, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.692 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cee526dc-ffdc-4215-9f03-640f79aad581]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394432, 'tstamp': 394432}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252733, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5760fda-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394434, 'tstamp': 394434}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252733, 'error': None, 'target': 'ovnmeta-b5760fda-9195-4e68-8506-4362bf1edf4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.694 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5760fda-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.707 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5760fda-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.708 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.710 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5760fda-90, col_values=(('external_ids', {'iface-id': '8a91c2ef-c369-46ce-8154-e9505f04ef0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:32:00 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:00.711 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.757 2 INFO nova.virt.libvirt.driver [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Instance destroyed successfully.
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.758 2 DEBUG nova.objects.instance [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'resources' on Instance uuid 1399f3a8-2c63-4b73-b015-f96a55b3d59f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.770 2 DEBUG nova.virt.libvirt.vif [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-vdnt5lz-vt5t5337qak7-rqvxszkro6gs-vnf-jbxxakm6ngcp',id=4,image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:25:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='1264e536-3255-4eb3-9284-12888e889ce8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6bd7784161a4cc3a2e8715feee92228',ramdisk_id='',reservation_id='r-2h7q92tb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='2c6780ee-8ca6-4dab-831c-c89907768547',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:25:57Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzcwNjM1MTQ1MTA1MjYxNDU4ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW
5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIy
BUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZC
hmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1
Oct 02 19:32:00 compute-0 nova_compute[194781]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzcwNjM1MT
Q1MTA1MjYxNDU4ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc3MDYzNTE0NTEwNTI2MTQ1ODQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03NzA2MzUxNDUxMDUyNjE0NTg0PT0tLQo=',user_id='5e0565a40c4e40f9ab77ce190f9527c5',uuid=1399f3a8-2c63-4b73-b015-f96a55b3d59f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.771 2 DEBUG nova.network.os_vif_util [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converting VIF {"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.771 2 DEBUG nova.network.os_vif_util [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.772 2 DEBUG os_vif [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d3b6f60-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.779 2 INFO os_vif [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:e2:ba,bridge_name='br-int',has_traffic_filtering=True,id=1d3b6f60-e6d6-492b-9cc3-b2355b1866fd,network=Network(b5760fda-9195-4e68-8506-4362bf1edf4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d3b6f60-e6')
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.780 2 INFO nova.virt.libvirt.driver [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Deleting instance files /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f_del
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.781 2 INFO nova.virt.libvirt.driver [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Deletion of /var/lib/nova/instances/1399f3a8-2c63-4b73-b015-f96a55b3d59f_del complete
Oct 02 19:32:00 compute-0 podman[252750]: 2025-10-02 19:32:00.834194436 +0000 UTC m=+0.076110884 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.892 2 INFO nova.compute.manager [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Took 0.43 seconds to destroy the instance on the hypervisor.
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.893 2 DEBUG oslo.service.loopingcall [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.893 2 DEBUG nova.compute.manager [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:32:00 compute-0 nova_compute[194781]: 2025-10-02 19:32:00.894 2 DEBUG nova.network.neutron [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:32:01 compute-0 podman[252773]: 2025-10-02 19:32:01.039587363 +0000 UTC m=+0.163655166 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:32:01 compute-0 rsyslogd[243731]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 19:32:00.770 2 DEBUG nova.virt.libvirt.vif [None req-32cc9910-64fb-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: ERROR   19:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: ERROR   19:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: ERROR   19:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:32:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:32:01 compute-0 nova_compute[194781]: 2025-10-02 19:32:01.976 2 DEBUG nova.network.neutron [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updated VIF entry in instance network info cache for port 1d3b6f60-e6d6-492b-9cc3-b2355b1866fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:32:01 compute-0 nova_compute[194781]: 2025-10-02 19:32:01.977 2 DEBUG nova.network.neutron [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updating instance_info_cache with network_info: [{"id": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "address": "fa:16:3e:b4:e2:ba", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d3b6f60-e6", "ovs_interfaceid": "1d3b6f60-e6d6-492b-9cc3-b2355b1866fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.008 2 DEBUG oslo_concurrency.lockutils [req-6aac74ba-3443-44aa-92f7-94c347b24a37 req-0a4c4210-50d1-4a70-9896-a8de839d9751 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-1399f3a8-2c63-4b73-b015-f96a55b3d59f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.218 2 DEBUG nova.network.neutron [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.246 2 INFO nova.compute.manager [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Took 1.35 seconds to deallocate network for instance.
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.294 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.294 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.391 2 DEBUG nova.compute.provider_tree [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.405 2 DEBUG nova.scheduler.client.report [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.435 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.474 2 INFO nova.scheduler.client.report [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Deleted allocations for instance 1399f3a8-2c63-4b73-b015-f96a55b3d59f
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.516 2 DEBUG nova.compute.manager [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received event network-vif-unplugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.516 2 DEBUG oslo_concurrency.lockutils [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.516 2 DEBUG oslo_concurrency.lockutils [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.516 2 DEBUG oslo_concurrency.lockutils [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.516 2 DEBUG nova.compute.manager [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] No waiting events found dispatching network-vif-unplugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.517 2 WARNING nova.compute.manager [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received unexpected event network-vif-unplugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd for instance with vm_state deleted and task_state None.
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.517 2 DEBUG nova.compute.manager [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.517 2 DEBUG oslo_concurrency.lockutils [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.517 2 DEBUG oslo_concurrency.lockutils [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.517 2 DEBUG oslo_concurrency.lockutils [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.518 2 DEBUG nova.compute.manager [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] No waiting events found dispatching network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.518 2 WARNING nova.compute.manager [req-db2c9761-fccd-401f-8729-af6a4c38ebce req-c07e534b-7755-4418-9575-7a28889af028 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Received unexpected event network-vif-plugged-1d3b6f60-e6d6-492b-9cc3-b2355b1866fd for instance with vm_state deleted and task_state None.
Oct 02 19:32:02 compute-0 nova_compute[194781]: 2025-10-02 19:32:02.556 2 DEBUG oslo_concurrency.lockutils [None req-32cc9910-64fb-4ed5-8191-38cc800c1edf 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "1399f3a8-2c63-4b73-b015-f96a55b3d59f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:05 compute-0 nova_compute[194781]: 2025-10-02 19:32:05.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:05 compute-0 nova_compute[194781]: 2025-10-02 19:32:05.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:10 compute-0 nova_compute[194781]: 2025-10-02 19:32:10.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:10 compute-0 podman[252798]: 2025-10-02 19:32:10.727496437 +0000 UTC m=+0.088089968 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:32:10 compute-0 nova_compute[194781]: 2025-10-02 19:32:10.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:15 compute-0 nova_compute[194781]: 2025-10-02 19:32:15.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:15 compute-0 nova_compute[194781]: 2025-10-02 19:32:15.755 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759433520.753927, 1399f3a8-2c63-4b73-b015-f96a55b3d59f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:32:15 compute-0 nova_compute[194781]: 2025-10-02 19:32:15.756 2 INFO nova.compute.manager [-] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] VM Stopped (Lifecycle Event)
Oct 02 19:32:15 compute-0 nova_compute[194781]: 2025-10-02 19:32:15.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:16 compute-0 nova_compute[194781]: 2025-10-02 19:32:16.040 2 DEBUG nova.compute.manager [None req-77943f95-0b90-4661-9757-ff237bb8d62a - - - - - -] [instance: 1399f3a8-2c63-4b73-b015-f96a55b3d59f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:32:18 compute-0 podman[252822]: 2025-10-02 19:32:18.720150524 +0000 UTC m=+0.078402084 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:32:18 compute-0 podman[252821]: 2025-10-02 19:32:18.746676369 +0000 UTC m=+0.108408540 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:32:20 compute-0 nova_compute[194781]: 2025-10-02 19:32:20.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:20 compute-0 nova_compute[194781]: 2025-10-02 19:32:20.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:21 compute-0 nova_compute[194781]: 2025-10-02 19:32:21.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:21 compute-0 podman[252860]: 2025-10-02 19:32:21.742321173 +0000 UTC m=+0.110273959 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:32:21 compute-0 podman[252861]: 2025-10-02 19:32:21.757611633 +0000 UTC m=+0.108621065 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Oct 02 19:32:21 compute-0 podman[252859]: 2025-10-02 19:32:21.76474941 +0000 UTC m=+0.125285992 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:32:23 compute-0 nova_compute[194781]: 2025-10-02 19:32:23.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:24 compute-0 nova_compute[194781]: 2025-10-02 19:32:24.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:24 compute-0 nova_compute[194781]: 2025-10-02 19:32:24.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:24 compute-0 nova_compute[194781]: 2025-10-02 19:32:24.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:24 compute-0 nova_compute[194781]: 2025-10-02 19:32:24.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:32:25 compute-0 nova_compute[194781]: 2025-10-02 19:32:25.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:25 compute-0 nova_compute[194781]: 2025-10-02 19:32:25.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:25 compute-0 nova_compute[194781]: 2025-10-02 19:32:25.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.062 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.063 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.101 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.102 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.103 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.104 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.207 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.265 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.266 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.323 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.324 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.421 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.422 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.484 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.832 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.834 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=72.50714111328125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.834 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.835 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.930 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.931 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:32:27 compute-0 nova_compute[194781]: 2025-10-02 19:32:27.931 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:32:28 compute-0 nova_compute[194781]: 2025-10-02 19:32:28.010 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:32:28 compute-0 nova_compute[194781]: 2025-10-02 19:32:28.036 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:32:28 compute-0 nova_compute[194781]: 2025-10-02 19:32:28.066 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:32:28 compute-0 nova_compute[194781]: 2025-10-02 19:32:28.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:28 compute-0 podman[252926]: 2025-10-02 19:32:28.747257991 +0000 UTC m=+0.111171022 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:32:28 compute-0 podman[252927]: 2025-10-02 19:32:28.747837336 +0000 UTC m=+0.108844191 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:32:29 compute-0 sshd-session[252969]: Accepted publickey for zuul from 38.102.83.227 port 53228 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 19:32:29 compute-0 systemd-logind[798]: New session 31 of user zuul.
Oct 02 19:32:29 compute-0 systemd[1]: Started Session 31 of User zuul.
Oct 02 19:32:29 compute-0 sshd-session[252969]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:32:29 compute-0 podman[209015]: time="2025-10-02T19:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:32:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:32:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5208 "" "Go-http-client/1.1"
Oct 02 19:32:30 compute-0 sudo[253146]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmbdjqszxkfqmqvjerseyqiwyohxunkn ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433549.7496152-55259-77263380316742/AnsiballZ_command.py'
Oct 02 19:32:30 compute-0 sudo[253146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:32:30 compute-0 nova_compute[194781]: 2025-10-02 19:32:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:30 compute-0 python3[253148]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:32:30 compute-0 sudo[253146]: pam_unix(sudo:session): session closed for user root
Oct 02 19:32:30 compute-0 nova_compute[194781]: 2025-10-02 19:32:30.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: ERROR   19:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: ERROR   19:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: ERROR   19:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:32:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:32:31 compute-0 podman[253187]: 2025-10-02 19:32:31.747036023 +0000 UTC m=+0.107650430 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:32:31 compute-0 podman[253188]: 2025-10-02 19:32:31.825522117 +0000 UTC m=+0.182778446 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.038 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.040 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.040 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.571 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.571 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.571 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:32:33 compute-0 nova_compute[194781]: 2025-10-02 19:32:33.572 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:32:34 compute-0 ovn_controller[97052]: 2025-10-02T19:32:34Z|00060|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 02 19:32:34 compute-0 nova_compute[194781]: 2025-10-02 19:32:34.698 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:32:34 compute-0 nova_compute[194781]: 2025-10-02 19:32:34.715 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:32:34 compute-0 nova_compute[194781]: 2025-10-02 19:32:34.716 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:32:35 compute-0 nova_compute[194781]: 2025-10-02 19:32:35.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:35 compute-0 nova_compute[194781]: 2025-10-02 19:32:35.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:40 compute-0 nova_compute[194781]: 2025-10-02 19:32:40.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:40 compute-0 nova_compute[194781]: 2025-10-02 19:32:40.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:41 compute-0 podman[253232]: 2025-10-02 19:32:41.759311088 +0000 UTC m=+0.116947873 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:32:45 compute-0 nova_compute[194781]: 2025-10-02 19:32:45.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:45 compute-0 nova_compute[194781]: 2025-10-02 19:32:45.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.290 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "563a4698-9f6f-4943-9653-401b25c49efc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.290 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "563a4698-9f6f-4943-9653-401b25c49efc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.363 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:32:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:47.469 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:47.470 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:32:47.472 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.553 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.554 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.565 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.566 2 INFO nova.compute.claims [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.706 2 DEBUG nova.compute.provider_tree [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.727 2 DEBUG nova.scheduler.client.report [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.783 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.783 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.822 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.857 2 INFO nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:32:47 compute-0 nova_compute[194781]: 2025-10-02 19:32:47.904 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.064 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.065 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.066 2 INFO nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Creating image(s)
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.067 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.068 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.068 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.069 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "731e6f3a25a50045fefcd1e8c54cf1a5094696c9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:48 compute-0 nova_compute[194781]: 2025-10-02 19:32:48.070 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "731e6f3a25a50045fefcd1e8c54cf1a5094696c9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.461 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.565 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.567 2 DEBUG nova.virt.images [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] 29d9b703-e91a-4723-bd6a-4e35237e80ee was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.570 2 DEBUG nova.privsep.utils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.571 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.part /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:49 compute-0 podman[253263]: 2025-10-02 19:32:49.743223645 +0000 UTC m=+0.118087873 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:32:49 compute-0 podman[253268]: 2025-10-02 19:32:49.744803196 +0000 UTC m=+0.114408946 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2)
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.780 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.part /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.converted" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.785 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.857 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.859 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "731e6f3a25a50045fefcd1e8c54cf1a5094696c9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.872 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.932 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.934 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "731e6f3a25a50045fefcd1e8c54cf1a5094696c9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.935 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "731e6f3a25a50045fefcd1e8c54cf1a5094696c9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:49 compute-0 nova_compute[194781]: 2025-10-02 19:32:49.966 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.043 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.044 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9,backing_fmt=raw /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.098 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9,backing_fmt=raw /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.100 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "731e6f3a25a50045fefcd1e8c54cf1a5094696c9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.100 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.161 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.163 2 DEBUG nova.virt.disk.api [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Checking if we can resize image /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.164 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.263 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.265 2 DEBUG nova.virt.disk.api [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Cannot resize image /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.266 2 DEBUG nova.objects.instance [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'migration_context' on Instance uuid 563a4698-9f6f-4943-9653-401b25c49efc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.327 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.328 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.330 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.361 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.441 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.443 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.444 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.469 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.545 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.547 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.594 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.eph0 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.595 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.597 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.673 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.674 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.675 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Ensure instance console log exists: /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.676 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.676 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.677 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.679 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:32:35Z,direct_url=<?>,disk_format='qcow2',id=29d9b703-e91a-4723-bd6a-4e35237e80ee,min_disk=0,min_ram=0,name='fvt_testing_image',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:32:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': '29d9b703-e91a-4723-bd6a-4e35237e80ee'}], 'ephemerals': [{'encrypted': False, 'size': 1, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.686 2 WARNING nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.695 2 DEBUG nova.virt.libvirt.host [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.696 2 DEBUG nova.virt.libvirt.host [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.702 2 DEBUG nova.virt.libvirt.host [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.703 2 DEBUG nova.virt.libvirt.host [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.703 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.704 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:32:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cfab5635-5def-43ab-8514-c70004be3235',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-02T19:32:35Z,direct_url=<?>,disk_format='qcow2',id=29d9b703-e91a-4723-bd6a-4e35237e80ee,min_disk=0,min_ram=0,name='fvt_testing_image',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-02T19:32:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.705 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.705 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.706 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.706 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.707 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.707 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.708 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.708 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.709 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.709 2 DEBUG nova.virt.hardware [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.715 2 DEBUG nova.objects.instance [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'pci_devices' on Instance uuid 563a4698-9f6f-4943-9653-401b25c49efc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.730 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <uuid>563a4698-9f6f-4943-9653-401b25c49efc</uuid>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <name>instance-00000005</name>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <memory>524288</memory>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:name>fvt_testing_server</nova:name>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:32:50</nova:creationTime>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:flavor name="fvt_testing_flavor">
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:memory>512</nova:memory>
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:user uuid="5e0565a40c4e40f9ab77ce190f9527c5">admin</nova:user>
Oct 02 19:32:50 compute-0 nova_compute[194781]:         <nova:project uuid="c6bd7784161a4cc3a2e8715feee92228">admin</nova:project>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="29d9b703-e91a-4723-bd6a-4e35237e80ee"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <nova:ports/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <system>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <entry name="serial">563a4698-9f6f-4943-9653-401b25c49efc</entry>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <entry name="uuid">563a4698-9f6f-4943-9653-401b25c49efc</entry>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </system>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <os>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </os>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <features>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </features>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.eph0"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <target dev="vdb" bus="virtio"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.config"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/console.log" append="off"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <video>
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </video>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:32:50 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:32:50 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:32:50 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:32:50 compute-0 nova_compute[194781]: </domain>
Oct 02 19:32:50 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.830 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.830 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.831 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:32:50 compute-0 nova_compute[194781]: 2025-10-02 19:32:50.831 2 INFO nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Using config drive
Oct 02 19:32:51 compute-0 nova_compute[194781]: 2025-10-02 19:32:51.259 2 INFO nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Creating config drive at /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.config
Oct 02 19:32:51 compute-0 nova_compute[194781]: 2025-10-02 19:32:51.269 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk4d6sbhf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:32:51 compute-0 nova_compute[194781]: 2025-10-02 19:32:51.399 2 DEBUG oslo_concurrency.processutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk4d6sbhf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:32:51 compute-0 systemd-machined[154795]: New machine qemu-5-instance-00000005.
Oct 02 19:32:51 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 02 19:32:52 compute-0 podman[253364]: 2025-10-02 19:32:52.121552685 +0000 UTC m=+0.085698225 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd)
Oct 02 19:32:52 compute-0 podman[253363]: 2025-10-02 19:32:52.123618899 +0000 UTC m=+0.092145283 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:32:52 compute-0 podman[253361]: 2025-10-02 19:32:52.171622836 +0000 UTC m=+0.128471615 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.688 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433572.6876965, 563a4698-9f6f-4943-9653-401b25c49efc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.689 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] VM Resumed (Lifecycle Event)
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.691 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.691 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.695 2 INFO nova.virt.libvirt.driver [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Instance spawned successfully.
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.696 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.716 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.725 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.731 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.732 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.732 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.733 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.733 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.734 2 DEBUG nova.virt.libvirt.driver [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.760 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.760 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759433572.6880052, 563a4698-9f6f-4943-9653-401b25c49efc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.761 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] VM Started (Lifecycle Event)
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.790 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.796 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.803 2 INFO nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Took 4.74 seconds to spawn the instance on the hypervisor.
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.803 2 DEBUG nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.817 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.871 2 INFO nova.compute.manager [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Took 5.35 seconds to build instance.
Oct 02 19:32:52 compute-0 nova_compute[194781]: 2025-10-02 19:32:52.897 2 DEBUG oslo_concurrency.lockutils [None req-bd774a4a-61e3-4fc3-8224-f295de7adbb2 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "563a4698-9f6f-4943-9653-401b25c49efc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:32:53 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:32:53 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:32:55 compute-0 nova_compute[194781]: 2025-10-02 19:32:55.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:55 compute-0 nova_compute[194781]: 2025-10-02 19:32:55.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:32:59 compute-0 podman[253436]: 2025-10-02 19:32:59.688791526 +0000 UTC m=+0.063856723 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:32:59 compute-0 podman[253437]: 2025-10-02 19:32:59.69237349 +0000 UTC m=+0.063888894 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:32:59 compute-0 podman[209015]: time="2025-10-02T19:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:32:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:32:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5211 "" "Go-http-client/1.1"
Oct 02 19:33:00 compute-0 nova_compute[194781]: 2025-10-02 19:33:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:00 compute-0 nova_compute[194781]: 2025-10-02 19:33:00.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: ERROR   19:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: ERROR   19:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: ERROR   19:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:33:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:33:02 compute-0 podman[253478]: 2025-10-02 19:33:02.720619266 +0000 UTC m=+0.091141017 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:33:02 compute-0 podman[253479]: 2025-10-02 19:33:02.760932442 +0000 UTC m=+0.130844437 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:33:05 compute-0 nova_compute[194781]: 2025-10-02 19:33:05.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:05 compute-0 nova_compute[194781]: 2025-10-02 19:33:05.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:10 compute-0 nova_compute[194781]: 2025-10-02 19:33:10.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:10 compute-0 nova_compute[194781]: 2025-10-02 19:33:10.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.725 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "563a4698-9f6f-4943-9653-401b25c49efc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.725 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "563a4698-9f6f-4943-9653-401b25c49efc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.726 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "563a4698-9f6f-4943-9653-401b25c49efc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.726 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "563a4698-9f6f-4943-9653-401b25c49efc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.726 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "563a4698-9f6f-4943-9653-401b25c49efc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.727 2 INFO nova.compute.manager [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Terminating instance
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.728 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "refresh_cache-563a4698-9f6f-4943-9653-401b25c49efc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.728 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquired lock "refresh_cache-563a4698-9f6f-4943-9653-401b25c49efc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.729 2 DEBUG nova.network.neutron [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:33:11 compute-0 nova_compute[194781]: 2025-10-02 19:33:11.879 2 DEBUG nova.network.neutron [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.192 2 DEBUG nova.network.neutron [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.216 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Releasing lock "refresh_cache-563a4698-9f6f-4943-9653-401b25c49efc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.217 2 DEBUG nova.compute.manager [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:33:12 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 02 19:33:12 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.970s CPU time.
Oct 02 19:33:12 compute-0 systemd-machined[154795]: Machine qemu-5-instance-00000005 terminated.
Oct 02 19:33:12 compute-0 podman[253524]: 2025-10-02 19:33:12.339791361 +0000 UTC m=+0.071738739 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.508 2 INFO nova.virt.libvirt.driver [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Instance destroyed successfully.
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.508 2 DEBUG nova.objects.instance [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lazy-loading 'resources' on Instance uuid 563a4698-9f6f-4943-9653-401b25c49efc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.522 2 INFO nova.virt.libvirt.driver [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Deleting instance files /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc_del
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.522 2 INFO nova.virt.libvirt.driver [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Deletion of /var/lib/nova/instances/563a4698-9f6f-4943-9653-401b25c49efc_del complete
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.581 2 INFO nova.compute.manager [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Took 0.36 seconds to destroy the instance on the hypervisor.
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.582 2 DEBUG oslo.service.loopingcall [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.583 2 DEBUG nova.compute.manager [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:33:12 compute-0 nova_compute[194781]: 2025-10-02 19:33:12.583 2 DEBUG nova.network.neutron [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.943 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.944 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.956 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.956 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.956 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:33:12.956810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:12.999 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 43550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.001 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.002 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.003 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:33:13.002525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.004 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:33:13.005352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.012 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.014 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.014 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.015 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:33:13.014764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.016 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.018 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:33:13.017996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.021 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:33:13.021378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.023 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.024 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:33:13.023762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.026 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.027 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.029 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:33:13.029320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.121 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.122 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.123 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.126 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:33:13.125448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.128 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:33:13.127961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.131 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:33:13.131058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.175 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.176 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.177 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.178 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.178 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.179 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.180 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:33:13.178496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.182 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:33:13.183319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.184 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.185 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.188 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:33:13.187891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.190 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.191 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:33:13.191300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.193 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.195 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.195 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:33:13.194959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.197 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.197 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.198 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:33:13.197694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.199 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.200 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.203 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.204 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.204 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:33:13.200792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:33:13.203512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.206 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:33:13.206100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:33:13.207824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.209 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:33:13.209484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.211 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:33:13.211099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.214 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:33:13.213889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:33:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.576 2 DEBUG nova.network.neutron [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.588 2 DEBUG nova.network.neutron [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.602 2 INFO nova.compute.manager [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Took 1.02 seconds to deallocate network for instance.
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.635 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.635 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.718 2 DEBUG nova.compute.provider_tree [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.733 2 DEBUG nova.scheduler.client.report [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.754 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.788 2 INFO nova.scheduler.client.report [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Deleted allocations for instance 563a4698-9f6f-4943-9653-401b25c49efc
Oct 02 19:33:13 compute-0 nova_compute[194781]: 2025-10-02 19:33:13.880 2 DEBUG oslo_concurrency.lockutils [None req-a18cafe5-4fb5-4b1e-895e-ef2f751a8c75 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Lock "563a4698-9f6f-4943-9653-401b25c49efc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:15 compute-0 nova_compute[194781]: 2025-10-02 19:33:15.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:15 compute-0 nova_compute[194781]: 2025-10-02 19:33:15.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:20 compute-0 nova_compute[194781]: 2025-10-02 19:33:20.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:20 compute-0 podman[253563]: 2025-10-02 19:33:20.759823055 +0000 UTC m=+0.124029038 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:33:20 compute-0 podman[253562]: 2025-10-02 19:33:20.761042067 +0000 UTC m=+0.118255177 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 19:33:20 compute-0 nova_compute[194781]: 2025-10-02 19:33:20.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:21 compute-0 nova_compute[194781]: 2025-10-02 19:33:21.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:22 compute-0 podman[253599]: 2025-10-02 19:33:22.734864307 +0000 UTC m=+0.096542379 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Oct 02 19:33:22 compute-0 podman[253600]: 2025-10-02 19:33:22.743554364 +0000 UTC m=+0.105313868 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:33:22 compute-0 podman[253598]: 2025-10-02 19:33:22.754398918 +0000 UTC m=+0.126351029 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git)
Oct 02 19:33:23 compute-0 nova_compute[194781]: 2025-10-02 19:33:23.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:24 compute-0 nova_compute[194781]: 2025-10-02 19:33:24.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:25 compute-0 nova_compute[194781]: 2025-10-02 19:33:25.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:25 compute-0 nova_compute[194781]: 2025-10-02 19:33:25.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:25 compute-0 nova_compute[194781]: 2025-10-02 19:33:25.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:26 compute-0 nova_compute[194781]: 2025-10-02 19:33:26.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:26 compute-0 nova_compute[194781]: 2025-10-02 19:33:26.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:33:27 compute-0 nova_compute[194781]: 2025-10-02 19:33:27.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:27 compute-0 nova_compute[194781]: 2025-10-02 19:33:27.504 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759433592.5030925, 563a4698-9f6f-4943-9653-401b25c49efc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:33:27 compute-0 nova_compute[194781]: 2025-10-02 19:33:27.504 2 INFO nova.compute.manager [-] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] VM Stopped (Lifecycle Event)
Oct 02 19:33:27 compute-0 nova_compute[194781]: 2025-10-02 19:33:27.534 2 DEBUG nova.compute.manager [None req-678de96d-9f38-4fcc-a8f6-49a5e3b022fc - - - - - -] [instance: 563a4698-9f6f-4943-9653-401b25c49efc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:33:28 compute-0 nova_compute[194781]: 2025-10-02 19:33:28.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.110 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.111 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.112 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.113 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.234 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.332 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.333 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.444 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.446 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.544 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.545 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:33:29 compute-0 nova_compute[194781]: 2025-10-02 19:33:29.606 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:33:29 compute-0 podman[209015]: time="2025-10-02T19:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:33:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:33:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5213 "" "Go-http-client/1.1"
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.027 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.029 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=72.47880554199219GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.030 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.031 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.165 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.166 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.166 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.228 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.270 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.310 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.312 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:30 compute-0 sshd-session[252972]: Received disconnect from 38.102.83.227 port 53228:11: disconnected by user
Oct 02 19:33:30 compute-0 sshd-session[252972]: Disconnected from user zuul 38.102.83.227 port 53228
Oct 02 19:33:30 compute-0 sshd-session[252969]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:33:30 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Oct 02 19:33:30 compute-0 systemd[1]: session-31.scope: Consumed 1.220s CPU time.
Oct 02 19:33:30 compute-0 systemd-logind[798]: Session 31 logged out. Waiting for processes to exit.
Oct 02 19:33:30 compute-0 systemd-logind[798]: Removed session 31.
Oct 02 19:33:30 compute-0 podman[253669]: 2025-10-02 19:33:30.479715263 +0000 UTC m=+0.074203543 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 19:33:30 compute-0 podman[253668]: 2025-10-02 19:33:30.486406429 +0000 UTC m=+0.080837328 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:30 compute-0 nova_compute[194781]: 2025-10-02 19:33:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:31 compute-0 openstack_network_exporter[211160]: ERROR   19:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:31 compute-0 openstack_network_exporter[211160]: ERROR   19:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:33:31 compute-0 openstack_network_exporter[211160]: ERROR   19:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:33:31 compute-0 openstack_network_exporter[211160]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:33:31 compute-0 openstack_network_exporter[211160]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:33:33 compute-0 podman[253714]: 2025-10-02 19:33:33.730023455 +0000 UTC m=+0.090357316 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 19:33:33 compute-0 podman[253715]: 2025-10-02 19:33:33.776963283 +0000 UTC m=+0.142377888 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.314 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.314 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.315 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.607 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.608 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.609 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.610 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:33:35 compute-0 nova_compute[194781]: 2025-10-02 19:33:35.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:36 compute-0 nova_compute[194781]: 2025-10-02 19:33:36.617 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:33:36 compute-0 nova_compute[194781]: 2025-10-02 19:33:36.633 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:33:36 compute-0 nova_compute[194781]: 2025-10-02 19:33:36.634 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:33:40 compute-0 nova_compute[194781]: 2025-10-02 19:33:40.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:40 compute-0 nova_compute[194781]: 2025-10-02 19:33:40.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:42 compute-0 podman[253758]: 2025-10-02 19:33:42.714567186 +0000 UTC m=+0.088990111 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:33:45 compute-0 nova_compute[194781]: 2025-10-02 19:33:45.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:45 compute-0 nova_compute[194781]: 2025-10-02 19:33:45.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:33:47.471 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:33:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:33:47.472 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:33:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:33:47.473 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:33:50 compute-0 sshd-session[253781]: Accepted publickey for zuul from 38.102.83.227 port 54348 ssh2: RSA SHA256:Cqypmgs6gPK5am/EoWoj7JixM3d03JX7hfQ1lfNOky8
Oct 02 19:33:50 compute-0 systemd-logind[798]: New session 32 of user zuul.
Oct 02 19:33:50 compute-0 systemd[1]: Started Session 32 of User zuul.
Oct 02 19:33:50 compute-0 sshd-session[253781]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:33:50 compute-0 nova_compute[194781]: 2025-10-02 19:33:50.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:50 compute-0 nova_compute[194781]: 2025-10-02 19:33:50.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:51 compute-0 podman[253933]: 2025-10-02 19:33:51.107143863 +0000 UTC m=+0.094768923 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct 02 19:33:51 compute-0 sudo[253993]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btnmifkdcpkioouihqspjsqfsbmatrnd ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433630.3464746-56009-271015229045073/AnsiballZ_command.py'
Oct 02 19:33:51 compute-0 podman[253934]: 2025-10-02 19:33:51.126379116 +0000 UTC m=+0.112049315 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:33:51 compute-0 sudo[253993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:51 compute-0 python3[253996]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:33:51 compute-0 sudo[253993]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:53 compute-0 podman[254037]: 2025-10-02 19:33:53.719578432 +0000 UTC m=+0.076851253 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 19:33:53 compute-0 podman[254035]: 2025-10-02 19:33:53.724305796 +0000 UTC m=+0.089524165 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41)
Oct 02 19:33:53 compute-0 podman[254036]: 2025-10-02 19:33:53.7427771 +0000 UTC m=+0.096146539 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, version=9.4)
Oct 02 19:33:55 compute-0 nova_compute[194781]: 2025-10-02 19:33:55.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:55 compute-0 nova_compute[194781]: 2025-10-02 19:33:55.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:33:58 compute-0 sudo[254265]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztmxnwdtnzotmznvogrwkiwwuxxyotfq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433638.1018782-56170-155791692847704/AnsiballZ_command.py'
Oct 02 19:33:58 compute-0 sudo[254265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:33:58 compute-0 python3[254267]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:33:58 compute-0 sudo[254265]: pam_unix(sudo:session): session closed for user root
Oct 02 19:33:59 compute-0 podman[209015]: time="2025-10-02T19:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:33:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:33:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5206 "" "Go-http-client/1.1"
Oct 02 19:34:00 compute-0 nova_compute[194781]: 2025-10-02 19:34:00.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:00 compute-0 podman[254307]: 2025-10-02 19:34:00.733216369 +0000 UTC m=+0.101597441 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:34:00 compute-0 podman[254308]: 2025-10-02 19:34:00.756662223 +0000 UTC m=+0.120189568 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 02 19:34:00 compute-0 nova_compute[194781]: 2025-10-02 19:34:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: ERROR   19:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: ERROR   19:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: ERROR   19:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:34:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:34:04 compute-0 podman[254351]: 2025-10-02 19:34:04.733745773 +0000 UTC m=+0.105748970 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 19:34:04 compute-0 podman[254352]: 2025-10-02 19:34:04.759820516 +0000 UTC m=+0.120868156 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 19:34:05 compute-0 nova_compute[194781]: 2025-10-02 19:34:05.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:05 compute-0 nova_compute[194781]: 2025-10-02 19:34:05.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:08 compute-0 sudo[254566]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvxcjmncdhbgsrsshzdkcthanuxsckew ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433647.69191-56325-10683257336505/AnsiballZ_command.py'
Oct 02 19:34:08 compute-0 sudo[254566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:08 compute-0 python3[254568]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:34:08 compute-0 sudo[254566]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:10 compute-0 nova_compute[194781]: 2025-10-02 19:34:10.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:10 compute-0 nova_compute[194781]: 2025-10-02 19:34:10.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:13 compute-0 podman[254608]: 2025-10-02 19:34:13.788400838 +0000 UTC m=+0.146167358 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:34:15 compute-0 nova_compute[194781]: 2025-10-02 19:34:15.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:15 compute-0 nova_compute[194781]: 2025-10-02 19:34:15.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:20 compute-0 nova_compute[194781]: 2025-10-02 19:34:20.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:20 compute-0 nova_compute[194781]: 2025-10-02 19:34:20.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:21 compute-0 nova_compute[194781]: 2025-10-02 19:34:21.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:21 compute-0 podman[254633]: 2025-10-02 19:34:21.741301155 +0000 UTC m=+0.102918325 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true)
Oct 02 19:34:21 compute-0 podman[254632]: 2025-10-02 19:34:21.758752742 +0000 UTC m=+0.121208654 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:34:23 compute-0 sudo[254844]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udgymypotsceevxdqfajmfaxdyafiphj ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759433662.7528195-56542-175092485464925/AnsiballZ_command.py'
Oct 02 19:34:23 compute-0 sudo[254844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:34:23 compute-0 python3[254846]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 19:34:23 compute-0 sudo[254844]: pam_unix(sudo:session): session closed for user root
Oct 02 19:34:24 compute-0 nova_compute[194781]: 2025-10-02 19:34:24.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:24 compute-0 nova_compute[194781]: 2025-10-02 19:34:24.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:24 compute-0 podman[254888]: 2025-10-02 19:34:24.756791938 +0000 UTC m=+0.107013752 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:34:24 compute-0 podman[254887]: 2025-10-02 19:34:24.765034334 +0000 UTC m=+0.116748117 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, distribution-scope=public, name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler)
Oct 02 19:34:24 compute-0 podman[254886]: 2025-10-02 19:34:24.787208965 +0000 UTC m=+0.144716350 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64)
Oct 02 19:34:25 compute-0 nova_compute[194781]: 2025-10-02 19:34:25.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:25 compute-0 nova_compute[194781]: 2025-10-02 19:34:25.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:26 compute-0 nova_compute[194781]: 2025-10-02 19:34:26.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:26 compute-0 nova_compute[194781]: 2025-10-02 19:34:26.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:34:27 compute-0 nova_compute[194781]: 2025-10-02 19:34:27.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:28 compute-0 nova_compute[194781]: 2025-10-02 19:34:28.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:28 compute-0 nova_compute[194781]: 2025-10-02 19:34:28.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:29 compute-0 podman[209015]: time="2025-10-02T19:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:34:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:34:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5216 "" "Go-http-client/1.1"
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.063 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.170 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.255 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.256 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.360 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.363 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.471 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.474 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.561 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:30 compute-0 nova_compute[194781]: 2025-10-02 19:34:30.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.010 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.011 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=72.47874450683594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.011 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.012 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.099 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.099 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.099 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.152 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.166 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.168 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:34:31 compute-0 nova_compute[194781]: 2025-10-02 19:34:31.169 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: ERROR   19:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: ERROR   19:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: ERROR   19:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:34:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:34:31 compute-0 podman[254953]: 2025-10-02 19:34:31.714287771 +0000 UTC m=+0.071950205 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:34:31 compute-0 podman[254954]: 2025-10-02 19:34:31.730100035 +0000 UTC m=+0.092336098 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:34:32 compute-0 nova_compute[194781]: 2025-10-02 19:34:32.165 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:35 compute-0 nova_compute[194781]: 2025-10-02 19:34:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:35 compute-0 podman[254993]: 2025-10-02 19:34:35.755625873 +0000 UTC m=+0.117950759 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:34:35 compute-0 podman[254994]: 2025-10-02 19:34:35.802415589 +0000 UTC m=+0.150847591 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Oct 02 19:34:35 compute-0 nova_compute[194781]: 2025-10-02 19:34:35.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.641 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.642 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.642 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:34:36 compute-0 nova_compute[194781]: 2025-10-02 19:34:36.642 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:34:38 compute-0 nova_compute[194781]: 2025-10-02 19:34:38.808 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:34:38 compute-0 nova_compute[194781]: 2025-10-02 19:34:38.825 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:34:38 compute-0 nova_compute[194781]: 2025-10-02 19:34:38.826 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:34:40 compute-0 nova_compute[194781]: 2025-10-02 19:34:40.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:40 compute-0 nova_compute[194781]: 2025-10-02 19:34:40.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:44 compute-0 podman[255038]: 2025-10-02 19:34:44.723342298 +0000 UTC m=+0.092526652 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:34:45 compute-0 nova_compute[194781]: 2025-10-02 19:34:45.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:45 compute-0 nova_compute[194781]: 2025-10-02 19:34:45.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:34:47.473 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:34:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:34:47.474 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:34:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:34:47.475 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:34:50 compute-0 nova_compute[194781]: 2025-10-02 19:34:50.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:50 compute-0 nova_compute[194781]: 2025-10-02 19:34:50.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:52 compute-0 podman[255062]: 2025-10-02 19:34:52.719151204 +0000 UTC m=+0.094822352 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:34:52 compute-0 podman[255063]: 2025-10-02 19:34:52.731522632 +0000 UTC m=+0.093374044 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:34:53 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:34:55 compute-0 nova_compute[194781]: 2025-10-02 19:34:55.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:55 compute-0 podman[255097]: 2025-10-02 19:34:55.713016657 +0000 UTC m=+0.083679278 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Oct 02 19:34:55 compute-0 podman[255098]: 2025-10-02 19:34:55.715845422 +0000 UTC m=+0.088309580 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., version=9.4, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 02 19:34:55 compute-0 podman[255099]: 2025-10-02 19:34:55.722691633 +0000 UTC m=+0.079419694 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:34:55 compute-0 nova_compute[194781]: 2025-10-02 19:34:55.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:34:59 compute-0 podman[209015]: time="2025-10-02T19:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:34:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:34:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5212 "" "Go-http-client/1.1"
Oct 02 19:35:00 compute-0 nova_compute[194781]: 2025-10-02 19:35:00.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:00 compute-0 nova_compute[194781]: 2025-10-02 19:35:00.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: ERROR   19:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: ERROR   19:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: ERROR   19:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:35:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:35:02 compute-0 podman[255156]: 2025-10-02 19:35:02.701819004 +0000 UTC m=+0.070962591 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:35:02 compute-0 podman[255155]: 2025-10-02 19:35:02.728528471 +0000 UTC m=+0.100806541 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:35:05 compute-0 nova_compute[194781]: 2025-10-02 19:35:05.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:05 compute-0 nova_compute[194781]: 2025-10-02 19:35:05.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:06 compute-0 podman[255195]: 2025-10-02 19:35:06.736474911 +0000 UTC m=+0.104907250 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:35:06 compute-0 podman[255196]: 2025-10-02 19:35:06.779588833 +0000 UTC m=+0.151765032 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:35:10 compute-0 nova_compute[194781]: 2025-10-02 19:35:10.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:10 compute-0 nova_compute[194781]: 2025-10-02 19:35:10.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.943 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.944 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.944 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.953 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.954 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.955 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:12.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:35:12.955565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.008 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 45270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.010 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:35:13.011264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.012 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.013 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.014 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:35:13.014086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.019 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.021 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:35:13.021400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.022 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.023 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.023 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.025 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.026 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:35:13.023507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:35:13.025740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.028 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.028 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.029 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.030 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:35:13.028381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:35:13.031066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.111 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.111 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.112 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.113 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.114 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.115 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:35:13.113878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.116 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.116 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:35:13.116140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:35:13.119879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.152 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.152 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.152 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.153 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.154 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.154 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:35:13.153752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.155 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.156 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.156 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.158 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.158 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.160 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:35:13.155712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.160 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:35:13.157386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.160 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:35:13.158396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:35:13.159955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.161 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.162 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.162 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:35:13.161701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.163 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.163 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.163 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:35:13.163678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:35:13.165357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.165 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.166 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.166 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:35:13.166945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.168 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:35:13.168033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.169 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:35:13.169148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.169 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.170 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.170 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:35:13.170132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:35:13.171403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.172 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.173 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:35:13.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:35:14 compute-0 podman[255238]: 2025-10-02 19:35:14.88298108 +0000 UTC m=+0.112994574 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:35:15 compute-0 nova_compute[194781]: 2025-10-02 19:35:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:15 compute-0 nova_compute[194781]: 2025-10-02 19:35:15.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:20 compute-0 nova_compute[194781]: 2025-10-02 19:35:20.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:20 compute-0 nova_compute[194781]: 2025-10-02 19:35:20.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:23 compute-0 nova_compute[194781]: 2025-10-02 19:35:23.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:23 compute-0 sshd-session[253784]: Received disconnect from 38.102.83.227 port 54348:11: disconnected by user
Oct 02 19:35:23 compute-0 sshd-session[253784]: Disconnected from user zuul 38.102.83.227 port 54348
Oct 02 19:35:23 compute-0 sshd-session[253781]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:35:23 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Oct 02 19:35:23 compute-0 systemd[1]: session-32.scope: Consumed 4.915s CPU time.
Oct 02 19:35:23 compute-0 systemd-logind[798]: Session 32 logged out. Waiting for processes to exit.
Oct 02 19:35:23 compute-0 systemd-logind[798]: Removed session 32.
Oct 02 19:35:23 compute-0 podman[255264]: 2025-10-02 19:35:23.494366395 +0000 UTC m=+0.070548329 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 02 19:35:23 compute-0 podman[255262]: 2025-10-02 19:35:23.512342741 +0000 UTC m=+0.090595770 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi)
Oct 02 19:35:24 compute-0 nova_compute[194781]: 2025-10-02 19:35:24.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:24 compute-0 nova_compute[194781]: 2025-10-02 19:35:24.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:35:24 compute-0 nova_compute[194781]: 2025-10-02 19:35:24.051 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:35:25 compute-0 nova_compute[194781]: 2025-10-02 19:35:25.052 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:25 compute-0 nova_compute[194781]: 2025-10-02 19:35:25.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:25 compute-0 nova_compute[194781]: 2025-10-02 19:35:25.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:26 compute-0 nova_compute[194781]: 2025-10-02 19:35:26.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:26 compute-0 podman[255301]: 2025-10-02 19:35:26.747471367 +0000 UTC m=+0.101667654 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 19:35:26 compute-0 podman[255302]: 2025-10-02 19:35:26.771768001 +0000 UTC m=+0.119400504 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:35:26 compute-0 podman[255300]: 2025-10-02 19:35:26.789597803 +0000 UTC m=+0.149738267 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, name=ubi9-minimal)
Oct 02 19:35:27 compute-0 nova_compute[194781]: 2025-10-02 19:35:27.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:27 compute-0 nova_compute[194781]: 2025-10-02 19:35:27.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:35:28 compute-0 nova_compute[194781]: 2025-10-02 19:35:28.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:29 compute-0 nova_compute[194781]: 2025-10-02 19:35:29.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:29 compute-0 nova_compute[194781]: 2025-10-02 19:35:29.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:29 compute-0 podman[209015]: time="2025-10-02T19:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:35:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:35:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5213 "" "Go-http-client/1.1"
Oct 02 19:35:30 compute-0 nova_compute[194781]: 2025-10-02 19:35:30.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:30 compute-0 nova_compute[194781]: 2025-10-02 19:35:30.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: ERROR   19:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: ERROR   19:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: ERROR   19:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: ERROR   19:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: ERROR   19:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:35:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.083 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.084 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.084 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.085 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.179 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.236 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.238 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.295 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.296 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.355 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.356 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.439 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.881 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.882 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=72.47874450683594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.882 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:35:32 compute-0 nova_compute[194781]: 2025-10-02 19:35:32.883 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.122 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.123 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.123 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.206 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.310 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.311 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.336 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.363 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.417 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.440 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.442 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:35:33 compute-0 nova_compute[194781]: 2025-10-02 19:35:33.442 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:35:33 compute-0 podman[255371]: 2025-10-02 19:35:33.693509513 +0000 UTC m=+0.065817474 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:35:33 compute-0 podman[255370]: 2025-10-02 19:35:33.742420399 +0000 UTC m=+0.109720018 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:35:35 compute-0 nova_compute[194781]: 2025-10-02 19:35:35.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:35 compute-0 nova_compute[194781]: 2025-10-02 19:35:35.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.443 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.444 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.444 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.643 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.644 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.644 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:35:37 compute-0 nova_compute[194781]: 2025-10-02 19:35:37.644 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:35:37 compute-0 podman[255413]: 2025-10-02 19:35:37.742893639 +0000 UTC m=+0.104596532 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:37 compute-0 podman[255414]: 2025-10-02 19:35:37.802914689 +0000 UTC m=+0.159225599 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:35:38 compute-0 nova_compute[194781]: 2025-10-02 19:35:38.984 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:35:39 compute-0 nova_compute[194781]: 2025-10-02 19:35:39.000 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:35:39 compute-0 nova_compute[194781]: 2025-10-02 19:35:39.001 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:35:39 compute-0 nova_compute[194781]: 2025-10-02 19:35:39.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:39 compute-0 nova_compute[194781]: 2025-10-02 19:35:39.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:35:40 compute-0 nova_compute[194781]: 2025-10-02 19:35:40.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:40 compute-0 nova_compute[194781]: 2025-10-02 19:35:40.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:45 compute-0 nova_compute[194781]: 2025-10-02 19:35:45.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:45 compute-0 podman[255455]: 2025-10-02 19:35:45.760240314 +0000 UTC m=+0.123791850 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:35:45 compute-0 nova_compute[194781]: 2025-10-02 19:35:45.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:35:47.474 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:35:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:35:47.475 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:35:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:35:47.476 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:35:48 compute-0 nova_compute[194781]: 2025-10-02 19:35:48.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:35:50 compute-0 nova_compute[194781]: 2025-10-02 19:35:50.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:50 compute-0 nova_compute[194781]: 2025-10-02 19:35:50.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:53 compute-0 podman[255478]: 2025-10-02 19:35:53.7456434 +0000 UTC m=+0.104701685 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:35:53 compute-0 podman[255479]: 2025-10-02 19:35:53.754622718 +0000 UTC m=+0.103854592 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:35:55 compute-0 nova_compute[194781]: 2025-10-02 19:35:55.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:55 compute-0 nova_compute[194781]: 2025-10-02 19:35:55.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:35:57 compute-0 unix_chkpwd[255519]: password check failed for user (root)
Oct 02 19:35:57 compute-0 sshd-session[255517]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:35:57 compute-0 podman[255520]: 2025-10-02 19:35:57.751378848 +0000 UTC m=+0.114686739 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=)
Oct 02 19:35:57 compute-0 podman[255522]: 2025-10-02 19:35:57.77863613 +0000 UTC m=+0.119679091 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:35:57 compute-0 podman[255521]: 2025-10-02 19:35:57.796530184 +0000 UTC m=+0.144876358 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:35:59 compute-0 sshd-session[255517]: Failed password for root from 91.224.92.108 port 63126 ssh2
Oct 02 19:35:59 compute-0 podman[209015]: time="2025-10-02T19:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:35:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:35:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5212 "" "Go-http-client/1.1"
Oct 02 19:36:00 compute-0 unix_chkpwd[255578]: password check failed for user (root)
Oct 02 19:36:00 compute-0 nova_compute[194781]: 2025-10-02 19:36:00.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:00 compute-0 nova_compute[194781]: 2025-10-02 19:36:00.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: ERROR   19:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: ERROR   19:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: ERROR   19:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: ERROR   19:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: ERROR   19:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:36:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:36:02 compute-0 sshd-session[255517]: Failed password for root from 91.224.92.108 port 63126 ssh2
Oct 02 19:36:03 compute-0 unix_chkpwd[255579]: password check failed for user (root)
Oct 02 19:36:04 compute-0 podman[255580]: 2025-10-02 19:36:04.7647942 +0000 UTC m=+0.119719402 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:36:04 compute-0 podman[255581]: 2025-10-02 19:36:04.791418005 +0000 UTC m=+0.140711498 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:36:05 compute-0 sshd-session[255517]: Failed password for root from 91.224.92.108 port 63126 ssh2
Oct 02 19:36:05 compute-0 nova_compute[194781]: 2025-10-02 19:36:05.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:05 compute-0 nova_compute[194781]: 2025-10-02 19:36:05.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:06 compute-0 sshd-session[255517]: Received disconnect from 91.224.92.108 port 63126:11:  [preauth]
Oct 02 19:36:06 compute-0 sshd-session[255517]: Disconnected from authenticating user root 91.224.92.108 port 63126 [preauth]
Oct 02 19:36:06 compute-0 sshd-session[255517]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:36:07 compute-0 unix_chkpwd[255625]: password check failed for user (root)
Oct 02 19:36:07 compute-0 sshd-session[255623]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:36:08 compute-0 podman[255626]: 2025-10-02 19:36:08.738715066 +0000 UTC m=+0.098284664 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:36:08 compute-0 podman[255627]: 2025-10-02 19:36:08.818004537 +0000 UTC m=+0.184402096 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:36:09 compute-0 sshd-session[255623]: Failed password for root from 91.224.92.108 port 35298 ssh2
Oct 02 19:36:10 compute-0 unix_chkpwd[255666]: password check failed for user (root)
Oct 02 19:36:10 compute-0 nova_compute[194781]: 2025-10-02 19:36:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:10 compute-0 nova_compute[194781]: 2025-10-02 19:36:10.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:13 compute-0 sshd-session[255623]: Failed password for root from 91.224.92.108 port 35298 ssh2
Oct 02 19:36:13 compute-0 unix_chkpwd[255667]: password check failed for user (root)
Oct 02 19:36:15 compute-0 nova_compute[194781]: 2025-10-02 19:36:15.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:15 compute-0 sshd-session[255623]: Failed password for root from 91.224.92.108 port 35298 ssh2
Oct 02 19:36:15 compute-0 nova_compute[194781]: 2025-10-02 19:36:15.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:16 compute-0 podman[255668]: 2025-10-02 19:36:16.738437814 +0000 UTC m=+0.095311705 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:36:16 compute-0 sshd-session[255623]: Received disconnect from 91.224.92.108 port 35298:11:  [preauth]
Oct 02 19:36:16 compute-0 sshd-session[255623]: Disconnected from authenticating user root 91.224.92.108 port 35298 [preauth]
Oct 02 19:36:16 compute-0 sshd-session[255623]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:36:18 compute-0 unix_chkpwd[255693]: password check failed for user (root)
Oct 02 19:36:18 compute-0 sshd-session[255691]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:36:19 compute-0 sshd-session[255691]: Failed password for root from 91.224.92.108 port 30752 ssh2
Oct 02 19:36:20 compute-0 nova_compute[194781]: 2025-10-02 19:36:20.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:20 compute-0 nova_compute[194781]: 2025-10-02 19:36:20.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:21 compute-0 unix_chkpwd[255695]: password check failed for user (root)
Oct 02 19:36:23 compute-0 nova_compute[194781]: 2025-10-02 19:36:23.056 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:23 compute-0 sshd-session[255691]: Failed password for root from 91.224.92.108 port 30752 ssh2
Oct 02 19:36:24 compute-0 unix_chkpwd[255696]: password check failed for user (root)
Oct 02 19:36:24 compute-0 podman[255698]: 2025-10-02 19:36:24.742925159 +0000 UTC m=+0.105569987 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 02 19:36:24 compute-0 podman[255697]: 2025-10-02 19:36:24.751393153 +0000 UTC m=+0.116702072 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:36:25 compute-0 nova_compute[194781]: 2025-10-02 19:36:25.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:25 compute-0 nova_compute[194781]: 2025-10-02 19:36:25.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:25 compute-0 nova_compute[194781]: 2025-10-02 19:36:25.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:26 compute-0 sshd-session[255691]: Failed password for root from 91.224.92.108 port 30752 ssh2
Oct 02 19:36:27 compute-0 sshd-session[255691]: Received disconnect from 91.224.92.108 port 30752:11:  [preauth]
Oct 02 19:36:27 compute-0 sshd-session[255691]: Disconnected from authenticating user root 91.224.92.108 port 30752 [preauth]
Oct 02 19:36:27 compute-0 sshd-session[255691]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 02 19:36:28 compute-0 nova_compute[194781]: 2025-10-02 19:36:28.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:28 compute-0 nova_compute[194781]: 2025-10-02 19:36:28.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:28 compute-0 podman[255735]: 2025-10-02 19:36:28.762897282 +0000 UTC m=+0.118221743 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:36:28 compute-0 podman[255736]: 2025-10-02 19:36:28.771583432 +0000 UTC m=+0.117011940 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:36:28 compute-0 podman[255737]: 2025-10-02 19:36:28.776546153 +0000 UTC m=+0.114150504 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Oct 02 19:36:29 compute-0 nova_compute[194781]: 2025-10-02 19:36:29.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:29 compute-0 nova_compute[194781]: 2025-10-02 19:36:29.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:36:29 compute-0 podman[209015]: time="2025-10-02T19:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:36:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:36:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5218 "" "Go-http-client/1.1"
Oct 02 19:36:30 compute-0 nova_compute[194781]: 2025-10-02 19:36:30.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:30 compute-0 nova_compute[194781]: 2025-10-02 19:36:30.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:31 compute-0 nova_compute[194781]: 2025-10-02 19:36:31.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:31 compute-0 nova_compute[194781]: 2025-10-02 19:36:31.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: ERROR   19:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: ERROR   19:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: ERROR   19:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: ERROR   19:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: ERROR   19:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:36:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.069 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.102 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.103 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.103 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.104 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.213 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.309 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.309 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.369 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.370 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.461 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.462 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.525 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.865 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.866 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=72.47874069213867GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.867 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.867 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.945 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.945 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:36:32 compute-0 nova_compute[194781]: 2025-10-02 19:36:32.946 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:36:33 compute-0 nova_compute[194781]: 2025-10-02 19:36:33.027 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:36:33 compute-0 nova_compute[194781]: 2025-10-02 19:36:33.046 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:36:33 compute-0 nova_compute[194781]: 2025-10-02 19:36:33.049 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:36:33 compute-0 nova_compute[194781]: 2025-10-02 19:36:33.049 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:36:35 compute-0 nova_compute[194781]: 2025-10-02 19:36:35.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:35 compute-0 podman[255802]: 2025-10-02 19:36:35.756687256 +0000 UTC m=+0.116600619 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:36:35 compute-0 podman[255803]: 2025-10-02 19:36:35.80286225 +0000 UTC m=+0.154552185 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:36:35 compute-0 nova_compute[194781]: 2025-10-02 19:36:35.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.013 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.014 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.014 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.710 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.711 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.711 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:36:37 compute-0 nova_compute[194781]: 2025-10-02 19:36:37.711 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:36:39 compute-0 nova_compute[194781]: 2025-10-02 19:36:39.120 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:36:39 compute-0 nova_compute[194781]: 2025-10-02 19:36:39.180 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:36:39 compute-0 nova_compute[194781]: 2025-10-02 19:36:39.181 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:36:39 compute-0 podman[255841]: 2025-10-02 19:36:39.738526413 +0000 UTC m=+0.093139809 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:36:39 compute-0 podman[255842]: 2025-10-02 19:36:39.804738437 +0000 UTC m=+0.155334546 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:36:40 compute-0 nova_compute[194781]: 2025-10-02 19:36:40.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:40 compute-0 nova_compute[194781]: 2025-10-02 19:36:40.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:45 compute-0 nova_compute[194781]: 2025-10-02 19:36:45.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:45 compute-0 nova_compute[194781]: 2025-10-02 19:36:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:36:47.476 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:36:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:36:47.477 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:36:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:36:47.478 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:36:47 compute-0 podman[255882]: 2025-10-02 19:36:47.731041302 +0000 UTC m=+0.103116322 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:36:50 compute-0 nova_compute[194781]: 2025-10-02 19:36:50.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:50 compute-0 nova_compute[194781]: 2025-10-02 19:36:50.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:55 compute-0 nova_compute[194781]: 2025-10-02 19:36:55.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:55 compute-0 podman[255907]: 2025-10-02 19:36:55.729739095 +0000 UTC m=+0.093668192 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=edpm)
Oct 02 19:36:55 compute-0 podman[255908]: 2025-10-02 19:36:55.757448999 +0000 UTC m=+0.106069561 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true)
Oct 02 19:36:55 compute-0 nova_compute[194781]: 2025-10-02 19:36:55.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:36:59 compute-0 podman[255944]: 2025-10-02 19:36:59.710478574 +0000 UTC m=+0.085340041 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Oct 02 19:36:59 compute-0 podman[255945]: 2025-10-02 19:36:59.72844633 +0000 UTC m=+0.099842595 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Oct 02 19:36:59 compute-0 podman[255946]: 2025-10-02 19:36:59.743780717 +0000 UTC m=+0.109524943 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 19:36:59 compute-0 podman[209015]: time="2025-10-02T19:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:36:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:36:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5215 "" "Go-http-client/1.1"
Oct 02 19:37:00 compute-0 nova_compute[194781]: 2025-10-02 19:37:00.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:00 compute-0 nova_compute[194781]: 2025-10-02 19:37:00.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:01 compute-0 openstack_network_exporter[211160]: ERROR   19:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:01 compute-0 openstack_network_exporter[211160]: ERROR   19:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:01 compute-0 openstack_network_exporter[211160]: ERROR   19:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:37:01 compute-0 openstack_network_exporter[211160]: ERROR   19:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:37:01 compute-0 openstack_network_exporter[211160]: ERROR   19:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:37:05 compute-0 nova_compute[194781]: 2025-10-02 19:37:05.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:05 compute-0 nova_compute[194781]: 2025-10-02 19:37:05.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:06 compute-0 podman[256003]: 2025-10-02 19:37:06.710457173 +0000 UTC m=+0.084399647 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:37:06 compute-0 podman[256004]: 2025-10-02 19:37:06.720734375 +0000 UTC m=+0.090692523 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:37:10 compute-0 nova_compute[194781]: 2025-10-02 19:37:10.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:10 compute-0 podman[256045]: 2025-10-02 19:37:10.723756184 +0000 UTC m=+0.093625421 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:37:10 compute-0 podman[256046]: 2025-10-02 19:37:10.762886681 +0000 UTC m=+0.140615416 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:37:10 compute-0 nova_compute[194781]: 2025-10-02 19:37:10.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.944 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.945 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba7114ce0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.956 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.963 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:12.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:37:12.964439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.003 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 47040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.005 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.005 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.006 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:37:13.005615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:37:13.007519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.014 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.016 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.016 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:37:13.015937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.017 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.018 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:37:13.017964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.020 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.022 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.023 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:37:13.020098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:37:13.022082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.024 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:37:13.024853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.116 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.117 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.117 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.119 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.120 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:37:13.119365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.121 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:37:13.121613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:37:13.123631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.166 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.167 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 rsyslogd[243731]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.167 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.169 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.169 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.169 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.170 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.170 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:37:13.169633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.172 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.173 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:37:13.173282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.173 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.176 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.177 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.177 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:37:13.176937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.178 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.180 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:37:13.179834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.181 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.181 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.184 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.185 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:37:13.183579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.188 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.188 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.189 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.191 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:37:13.187803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:37:13.191845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.193 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.193 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.195 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:37:13.195788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.197 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.197 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.199 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:37:13.199636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.201 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:37:13.202275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.206 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:37:13.204516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:37:13.207850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.208 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.209 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.210 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:37:13.210142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 rsyslogd[243731]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:37:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:37:15 compute-0 nova_compute[194781]: 2025-10-02 19:37:15.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:15 compute-0 nova_compute[194781]: 2025-10-02 19:37:15.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:18 compute-0 podman[256090]: 2025-10-02 19:37:18.750662807 +0000 UTC m=+0.110193770 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:37:20 compute-0 nova_compute[194781]: 2025-10-02 19:37:20.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:20 compute-0 nova_compute[194781]: 2025-10-02 19:37:20.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:25 compute-0 nova_compute[194781]: 2025-10-02 19:37:25.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:25 compute-0 nova_compute[194781]: 2025-10-02 19:37:25.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:25 compute-0 nova_compute[194781]: 2025-10-02 19:37:25.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:26 compute-0 nova_compute[194781]: 2025-10-02 19:37:26.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:26 compute-0 podman[256112]: 2025-10-02 19:37:26.723327618 +0000 UTC m=+0.093479308 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:37:26 compute-0 podman[256113]: 2025-10-02 19:37:26.772214773 +0000 UTC m=+0.122365133 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:37:29 compute-0 nova_compute[194781]: 2025-10-02 19:37:29.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:29 compute-0 nova_compute[194781]: 2025-10-02 19:37:29.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:29 compute-0 nova_compute[194781]: 2025-10-02 19:37:29.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:37:29 compute-0 podman[209015]: time="2025-10-02T19:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:37:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:37:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5222 "" "Go-http-client/1.1"
Oct 02 19:37:30 compute-0 nova_compute[194781]: 2025-10-02 19:37:30.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:30 compute-0 nova_compute[194781]: 2025-10-02 19:37:30.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:30 compute-0 podman[256157]: 2025-10-02 19:37:30.733249369 +0000 UTC m=+0.092210303 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:37:30 compute-0 podman[256152]: 2025-10-02 19:37:30.738449987 +0000 UTC m=+0.091012742 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Oct 02 19:37:30 compute-0 podman[256151]: 2025-10-02 19:37:30.740147612 +0000 UTC m=+0.109798829 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:37:30 compute-0 nova_compute[194781]: 2025-10-02 19:37:30.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: ERROR   19:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: ERROR   19:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: ERROR   19:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: ERROR   19:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:37:31 compute-0 openstack_network_exporter[211160]: ERROR   19:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.073 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.074 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.074 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.075 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.162 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.262 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.264 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.327 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.329 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.419 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.421 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.478 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.916 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.918 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5110MB free_disk=72.47976684570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.919 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:37:32 compute-0 nova_compute[194781]: 2025-10-02 19:37:32.919 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.029 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.030 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.031 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.109 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.140 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.141 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:37:33 compute-0 nova_compute[194781]: 2025-10-02 19:37:33.142 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:37:35 compute-0 nova_compute[194781]: 2025-10-02 19:37:35.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:35 compute-0 nova_compute[194781]: 2025-10-02 19:37:35.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:37 compute-0 podman[256221]: 2025-10-02 19:37:37.711547133 +0000 UTC m=+0.091177946 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:37:37 compute-0 podman[256222]: 2025-10-02 19:37:37.724167687 +0000 UTC m=+0.101358915 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.143 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.144 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.144 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.764 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.764 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.765 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:37:38 compute-0 nova_compute[194781]: 2025-10-02 19:37:38.766 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:37:40 compute-0 nova_compute[194781]: 2025-10-02 19:37:40.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:40 compute-0 nova_compute[194781]: 2025-10-02 19:37:40.801 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:37:40 compute-0 nova_compute[194781]: 2025-10-02 19:37:40.828 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:37:40 compute-0 nova_compute[194781]: 2025-10-02 19:37:40.828 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:37:40 compute-0 nova_compute[194781]: 2025-10-02 19:37:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:41 compute-0 podman[256264]: 2025-10-02 19:37:41.724612109 +0000 UTC m=+0.092486351 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:37:41 compute-0 podman[256265]: 2025-10-02 19:37:41.768671516 +0000 UTC m=+0.134561116 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 19:37:45 compute-0 nova_compute[194781]: 2025-10-02 19:37:45.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:45 compute-0 nova_compute[194781]: 2025-10-02 19:37:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:37:47.478 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:37:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:37:47.478 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:37:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:37:47.479 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:37:49 compute-0 podman[256309]: 2025-10-02 19:37:49.76353116 +0000 UTC m=+0.126177534 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:37:50 compute-0 nova_compute[194781]: 2025-10-02 19:37:50.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:50 compute-0 nova_compute[194781]: 2025-10-02 19:37:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:55 compute-0 nova_compute[194781]: 2025-10-02 19:37:55.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:55 compute-0 nova_compute[194781]: 2025-10-02 19:37:55.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:37:57 compute-0 podman[256332]: 2025-10-02 19:37:57.711406993 +0000 UTC m=+0.087282723 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:37:57 compute-0 podman[256333]: 2025-10-02 19:37:57.727974302 +0000 UTC m=+0.093923379 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:37:59 compute-0 podman[209015]: time="2025-10-02T19:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:37:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:37:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5217 "" "Go-http-client/1.1"
Oct 02 19:38:00 compute-0 nova_compute[194781]: 2025-10-02 19:38:00.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:00 compute-0 nova_compute[194781]: 2025-10-02 19:38:00.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: ERROR   19:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: ERROR   19:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: ERROR   19:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: ERROR   19:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: ERROR   19:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:38:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:38:01 compute-0 podman[256371]: 2025-10-02 19:38:01.703115069 +0000 UTC m=+0.073125968 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:01 compute-0 podman[256370]: 2025-10-02 19:38:01.709084827 +0000 UTC m=+0.079075736 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Oct 02 19:38:01 compute-0 podman[256369]: 2025-10-02 19:38:01.713007731 +0000 UTC m=+0.086574474 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Oct 02 19:38:05 compute-0 nova_compute[194781]: 2025-10-02 19:38:05.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:05 compute-0 nova_compute[194781]: 2025-10-02 19:38:05.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:08 compute-0 podman[256425]: 2025-10-02 19:38:08.732359955 +0000 UTC m=+0.100378670 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 19:38:08 compute-0 podman[256424]: 2025-10-02 19:38:08.73669906 +0000 UTC m=+0.099839066 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:38:10 compute-0 nova_compute[194781]: 2025-10-02 19:38:10.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:10 compute-0 nova_compute[194781]: 2025-10-02 19:38:10.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:12 compute-0 podman[256464]: 2025-10-02 19:38:12.750441814 +0000 UTC m=+0.124141159 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:38:12 compute-0 podman[256465]: 2025-10-02 19:38:12.792445457 +0000 UTC m=+0.162755243 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 19:38:15 compute-0 nova_compute[194781]: 2025-10-02 19:38:15.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:16 compute-0 nova_compute[194781]: 2025-10-02 19:38:16.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:20 compute-0 podman[256508]: 2025-10-02 19:38:20.740782214 +0000 UTC m=+0.111772072 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:38:20 compute-0 nova_compute[194781]: 2025-10-02 19:38:20.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:21 compute-0 nova_compute[194781]: 2025-10-02 19:38:21.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:25 compute-0 nova_compute[194781]: 2025-10-02 19:38:25.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:25 compute-0 nova_compute[194781]: 2025-10-02 19:38:25.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:26 compute-0 nova_compute[194781]: 2025-10-02 19:38:26.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:28 compute-0 nova_compute[194781]: 2025-10-02 19:38:28.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:28 compute-0 podman[256532]: 2025-10-02 19:38:28.74735095 +0000 UTC m=+0.109549143 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct 02 19:38:28 compute-0 podman[256531]: 2025-10-02 19:38:28.74736648 +0000 UTC m=+0.108972547 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:29 compute-0 podman[209015]: time="2025-10-02T19:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:38:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:38:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5211 "" "Go-http-client/1.1"
Oct 02 19:38:30 compute-0 nova_compute[194781]: 2025-10-02 19:38:30.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:30 compute-0 nova_compute[194781]: 2025-10-02 19:38:30.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:30 compute-0 nova_compute[194781]: 2025-10-02 19:38:30.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:38:30 compute-0 nova_compute[194781]: 2025-10-02 19:38:30.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:31 compute-0 nova_compute[194781]: 2025-10-02 19:38:31.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: ERROR   19:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: ERROR   19:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: ERROR   19:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: ERROR   19:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: ERROR   19:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:38:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:38:32 compute-0 nova_compute[194781]: 2025-10-02 19:38:32.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:32 compute-0 podman[256571]: 2025-10-02 19:38:32.725092579 +0000 UTC m=+0.091066424 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, distribution-scope=public, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Oct 02 19:38:32 compute-0 podman[256570]: 2025-10-02 19:38:32.768973931 +0000 UTC m=+0.126843961 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Oct 02 19:38:32 compute-0 podman[256572]: 2025-10-02 19:38:32.778849113 +0000 UTC m=+0.133004135 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.072 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.072 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.073 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.073 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.166 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.230 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.231 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.287 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.288 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.383 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.384 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.453 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.876 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.878 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5127MB free_disk=72.47976684570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.878 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.878 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.965 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.966 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:38:34 compute-0 nova_compute[194781]: 2025-10-02 19:38:34.966 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:38:35 compute-0 nova_compute[194781]: 2025-10-02 19:38:35.019 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:38:35 compute-0 nova_compute[194781]: 2025-10-02 19:38:35.035 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:38:35 compute-0 nova_compute[194781]: 2025-10-02 19:38:35.037 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:38:35 compute-0 nova_compute[194781]: 2025-10-02 19:38:35.037 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:38:35 compute-0 nova_compute[194781]: 2025-10-02 19:38:35.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:36 compute-0 nova_compute[194781]: 2025-10-02 19:38:36.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:36 compute-0 nova_compute[194781]: 2025-10-02 19:38:36.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:38:39 compute-0 podman[256639]: 2025-10-02 19:38:39.7378692 +0000 UTC m=+0.095784838 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.760 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.760 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.761 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:38:39 compute-0 nova_compute[194781]: 2025-10-02 19:38:39.761 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:38:39 compute-0 podman[256640]: 2025-10-02 19:38:39.768849781 +0000 UTC m=+0.132344767 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:40 compute-0 nova_compute[194781]: 2025-10-02 19:38:40.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:41 compute-0 nova_compute[194781]: 2025-10-02 19:38:41.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:41 compute-0 nova_compute[194781]: 2025-10-02 19:38:41.414 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:38:41 compute-0 nova_compute[194781]: 2025-10-02 19:38:41.432 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:38:41 compute-0 nova_compute[194781]: 2025-10-02 19:38:41.433 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:38:43 compute-0 podman[256679]: 2025-10-02 19:38:43.766975026 +0000 UTC m=+0.134404832 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:38:43 compute-0 podman[256680]: 2025-10-02 19:38:43.787770947 +0000 UTC m=+0.150705404 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 19:38:45 compute-0 nova_compute[194781]: 2025-10-02 19:38:45.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:46 compute-0 nova_compute[194781]: 2025-10-02 19:38:46.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:38:47.483 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:38:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:38:47.486 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:38:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:38:47.487 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:38:50 compute-0 nova_compute[194781]: 2025-10-02 19:38:50.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:51 compute-0 nova_compute[194781]: 2025-10-02 19:38:51.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:51 compute-0 podman[256726]: 2025-10-02 19:38:51.751813123 +0000 UTC m=+0.107256922 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:38:55 compute-0 nova_compute[194781]: 2025-10-02 19:38:55.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:56 compute-0 nova_compute[194781]: 2025-10-02 19:38:56.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:38:59 compute-0 podman[256749]: 2025-10-02 19:38:59.700098058 +0000 UTC m=+0.076546085 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:38:59 compute-0 podman[256750]: 2025-10-02 19:38:59.707435103 +0000 UTC m=+0.080831129 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, 
managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 02 19:38:59 compute-0 podman[209015]: time="2025-10-02T19:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:38:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:38:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5210 "" "Go-http-client/1.1"
Oct 02 19:39:00 compute-0 nova_compute[194781]: 2025-10-02 19:39:00.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:01 compute-0 nova_compute[194781]: 2025-10-02 19:39:01.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: ERROR   19:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: ERROR   19:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: ERROR   19:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: ERROR   19:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: ERROR   19:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:39:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:39:03 compute-0 podman[256782]: 2025-10-02 19:39:03.721264053 +0000 UTC m=+0.096177857 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:39:03 compute-0 podman[256783]: 2025-10-02 19:39:03.727497648 +0000 UTC m=+0.096254629 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, container_name=kepler, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', 
'/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, version=9.4)
Oct 02 19:39:03 compute-0 podman[256784]: 2025-10-02 19:39:03.74975351 +0000 UTC m=+0.106918043 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:39:05 compute-0 nova_compute[194781]: 2025-10-02 19:39:05.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:06 compute-0 nova_compute[194781]: 2025-10-02 19:39:06.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:10 compute-0 podman[256841]: 2025-10-02 19:39:10.759063068 +0000 UTC m=+0.112711716 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:39:10 compute-0 podman[256840]: 2025-10-02 19:39:10.769991179 +0000 UTC m=+0.132138673 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:39:10 compute-0 nova_compute[194781]: 2025-10-02 19:39:10.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:11 compute-0 nova_compute[194781]: 2025-10-02 19:39:11.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.945 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.946 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fcafb60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.954 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.958 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:39:12.959520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.993 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 48770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.995 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.995 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:39:12.995632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.997 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:12.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:39:12.997731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.004 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.006 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.007 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.009 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:39:13.007624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.010 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:39:13.010159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.012 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.013 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:39:13.012749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.014 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.016 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.018 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:39:13.015785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:39:13.020507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.082 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.083 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.084 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.086 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:39:13.086090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.088 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.089 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:39:13.088562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:39:13.090843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.121 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.122 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.122 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.124 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.125 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.126 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.127 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.130 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.131 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.131 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:39:13.125380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.133 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.134 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:39:13.130436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.135 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.136 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:39:13.134405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.137 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:39:13.136498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.137 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.139 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.140 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.140 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:39:13.140012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.141 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.141 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.143 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.144 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.144 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:39:13.144398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.145 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.146 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.147 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.149 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.149 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.150 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:39:13.148695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.153 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:39:13.152893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.154 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.154 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.158 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:39:13.157731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.159 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:39:13.159740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.162 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.163 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.164 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:39:13.161516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:39:13.162673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:39:13.164243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:39:13.168 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:39:14 compute-0 podman[256881]: 2025-10-02 19:39:14.75977877 +0000 UTC m=+0.119719932 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:39:14 compute-0 podman[256882]: 2025-10-02 19:39:14.768812 +0000 UTC m=+0.129878462 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:39:15 compute-0 nova_compute[194781]: 2025-10-02 19:39:15.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:16 compute-0 nova_compute[194781]: 2025-10-02 19:39:16.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:20 compute-0 nova_compute[194781]: 2025-10-02 19:39:20.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:21 compute-0 nova_compute[194781]: 2025-10-02 19:39:21.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:22 compute-0 podman[256925]: 2025-10-02 19:39:22.757816186 +0000 UTC m=+0.117404201 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:39:25 compute-0 nova_compute[194781]: 2025-10-02 19:39:25.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:26 compute-0 nova_compute[194781]: 2025-10-02 19:39:26.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:26 compute-0 nova_compute[194781]: 2025-10-02 19:39:26.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:29 compute-0 nova_compute[194781]: 2025-10-02 19:39:29.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:29 compute-0 podman[209015]: time="2025-10-02T19:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:39:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:39:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5217 "" "Go-http-client/1.1"
Oct 02 19:39:30 compute-0 podman[256949]: 2025-10-02 19:39:30.73783936 +0000 UTC m=+0.105961737 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Oct 02 19:39:30 compute-0 podman[256950]: 2025-10-02 19:39:30.750506537 +0000 UTC m=+0.102341131 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 19:39:30 compute-0 nova_compute[194781]: 2025-10-02 19:39:30.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:31 compute-0 nova_compute[194781]: 2025-10-02 19:39:31.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:31 compute-0 nova_compute[194781]: 2025-10-02 19:39:31.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:31 compute-0 nova_compute[194781]: 2025-10-02 19:39:31.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:39:31 compute-0 nova_compute[194781]: 2025-10-02 19:39:31.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: ERROR   19:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: ERROR   19:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: ERROR   19:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: ERROR   19:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: ERROR   19:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:39:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:39:32 compute-0 nova_compute[194781]: 2025-10-02 19:39:32.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.096 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.097 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.098 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.099 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.188 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.253 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.254 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.312 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.314 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.406 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.408 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.480 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:39:34 compute-0 podman[256999]: 2025-10-02 19:39:34.744651535 +0000 UTC m=+0.107349774 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=edpm, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 02 19:39:34 compute-0 podman[256998]: 2025-10-02 19:39:34.747361327 +0000 UTC m=+0.103737728 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:39:34 compute-0 podman[257000]: 2025-10-02 19:39:34.762623142 +0000 UTC m=+0.106157922 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.892 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.894 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=72.47976684570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.894 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.894 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.976 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.977 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:39:34 compute-0 nova_compute[194781]: 2025-10-02 19:39:34.977 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:39:35 compute-0 nova_compute[194781]: 2025-10-02 19:39:35.028 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:39:35 compute-0 nova_compute[194781]: 2025-10-02 19:39:35.044 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:39:35 compute-0 nova_compute[194781]: 2025-10-02 19:39:35.046 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:39:35 compute-0 nova_compute[194781]: 2025-10-02 19:39:35.046 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:39:35 compute-0 nova_compute[194781]: 2025-10-02 19:39:35.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:36 compute-0 nova_compute[194781]: 2025-10-02 19:39:36.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:37 compute-0 nova_compute[194781]: 2025-10-02 19:39:37.047 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:40 compute-0 nova_compute[194781]: 2025-10-02 19:39:40.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.275 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.275 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.276 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:39:41 compute-0 nova_compute[194781]: 2025-10-02 19:39:41.277 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:39:41 compute-0 podman[257057]: 2025-10-02 19:39:41.713432058 +0000 UTC m=+0.083634853 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:39:41 compute-0 podman[257058]: 2025-10-02 19:39:41.764924806 +0000 UTC m=+0.129292156 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:39:42 compute-0 nova_compute[194781]: 2025-10-02 19:39:42.836 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:39:42 compute-0 nova_compute[194781]: 2025-10-02 19:39:42.859 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:39:42 compute-0 nova_compute[194781]: 2025-10-02 19:39:42.860 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:39:45 compute-0 podman[257099]: 2025-10-02 19:39:45.743263254 +0000 UTC m=+0.107461256 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:39:45 compute-0 nova_compute[194781]: 2025-10-02 19:39:45.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:45 compute-0 podman[257100]: 2025-10-02 19:39:45.843449857 +0000 UTC m=+0.195366773 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:39:46 compute-0 nova_compute[194781]: 2025-10-02 19:39:46.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:39:47.484 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:39:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:39:47.485 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:39:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:39:47.485 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:39:50 compute-0 nova_compute[194781]: 2025-10-02 19:39:50.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:51 compute-0 nova_compute[194781]: 2025-10-02 19:39:51.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:53 compute-0 podman[257144]: 2025-10-02 19:39:53.710850897 +0000 UTC m=+0.082561545 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:39:55 compute-0 nova_compute[194781]: 2025-10-02 19:39:55.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:56 compute-0 nova_compute[194781]: 2025-10-02 19:39:56.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:39:59 compute-0 podman[209015]: time="2025-10-02T19:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:39:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:39:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5210 "" "Go-http-client/1.1"
Oct 02 19:40:00 compute-0 nova_compute[194781]: 2025-10-02 19:40:00.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:01 compute-0 nova_compute[194781]: 2025-10-02 19:40:01.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: ERROR   19:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: ERROR   19:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: ERROR   19:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: ERROR   19:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: ERROR   19:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:40:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:40:01 compute-0 podman[257169]: 2025-10-02 19:40:01.708557231 +0000 UTC m=+0.082161814 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm)
Oct 02 19:40:01 compute-0 podman[257168]: 2025-10-02 19:40:01.718845585 +0000 UTC m=+0.094836762 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:40:05 compute-0 podman[257204]: 2025-10-02 19:40:05.760692551 +0000 UTC m=+0.123725669 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Oct 02 19:40:05 compute-0 podman[257206]: 2025-10-02 19:40:05.761457821 +0000 UTC m=+0.114110493 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:40:05 compute-0 podman[257205]: 2025-10-02 19:40:05.776944552 +0000 UTC m=+0.135893332 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, container_name=kepler)
Oct 02 19:40:05 compute-0 nova_compute[194781]: 2025-10-02 19:40:05.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:06 compute-0 nova_compute[194781]: 2025-10-02 19:40:06.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:10 compute-0 nova_compute[194781]: 2025-10-02 19:40:10.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:11 compute-0 nova_compute[194781]: 2025-10-02 19:40:11.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:12 compute-0 podman[257260]: 2025-10-02 19:40:12.707407565 +0000 UTC m=+0.077145271 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:40:12 compute-0 podman[257261]: 2025-10-02 19:40:12.74673274 +0000 UTC m=+0.112145621 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 19:40:15 compute-0 nova_compute[194781]: 2025-10-02 19:40:15.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:16 compute-0 nova_compute[194781]: 2025-10-02 19:40:16.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:16 compute-0 podman[257304]: 2025-10-02 19:40:16.705882727 +0000 UTC m=+0.083443778 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:40:16 compute-0 podman[257305]: 2025-10-02 19:40:16.777549822 +0000 UTC m=+0.141438320 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:40:20 compute-0 nova_compute[194781]: 2025-10-02 19:40:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:21 compute-0 nova_compute[194781]: 2025-10-02 19:40:21.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:24 compute-0 podman[257346]: 2025-10-02 19:40:24.735084528 +0000 UTC m=+0.111097453 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:40:25 compute-0 nova_compute[194781]: 2025-10-02 19:40:25.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:26 compute-0 nova_compute[194781]: 2025-10-02 19:40:26.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:26 compute-0 nova_compute[194781]: 2025-10-02 19:40:26.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:29 compute-0 podman[209015]: time="2025-10-02T19:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:40:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:40:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5222 "" "Go-http-client/1.1"
Oct 02 19:40:30 compute-0 nova_compute[194781]: 2025-10-02 19:40:30.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:31 compute-0 nova_compute[194781]: 2025-10-02 19:40:31.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:31 compute-0 nova_compute[194781]: 2025-10-02 19:40:31.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:31 compute-0 nova_compute[194781]: 2025-10-02 19:40:31.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:31 compute-0 nova_compute[194781]: 2025-10-02 19:40:31.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:40:31 compute-0 nova_compute[194781]: 2025-10-02 19:40:31.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: ERROR   19:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: ERROR   19:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: ERROR   19:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: ERROR   19:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: ERROR   19:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:40:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:40:32 compute-0 nova_compute[194781]: 2025-10-02 19:40:32.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:32 compute-0 podman[257370]: 2025-10-02 19:40:32.719422287 +0000 UTC m=+0.086140280 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, 
org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Oct 02 19:40:32 compute-0 podman[257369]: 2025-10-02 19:40:32.729122175 +0000 UTC m=+0.089604492 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:40:33 compute-0 nova_compute[194781]: 2025-10-02 19:40:33.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:33 compute-0 nova_compute[194781]: 2025-10-02 19:40:33.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:40:33 compute-0 nova_compute[194781]: 2025-10-02 19:40:33.055 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.051 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.052 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.084 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.084 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.084 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.084 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.185 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.282 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.283 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.377 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.379 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.479 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.480 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.586 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:40:35 compute-0 nova_compute[194781]: 2025-10-02 19:40:35.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.078 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.080 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=72.47976684570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.080 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.081 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.255 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.256 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.258 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.332 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.395 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.396 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.424 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.444 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.505 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.520 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.521 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:40:36 compute-0 nova_compute[194781]: 2025-10-02 19:40:36.521 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:40:36 compute-0 podman[257419]: 2025-10-02 19:40:36.716035242 +0000 UTC m=+0.083975893 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:40:36 compute-0 podman[257418]: 2025-10-02 19:40:36.720225303 +0000 UTC m=+0.085717699 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 19:40:36 compute-0 podman[257420]: 2025-10-02 19:40:36.75359039 +0000 UTC m=+0.114731680 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:40:37 compute-0 nova_compute[194781]: 2025-10-02 19:40:37.503 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:40 compute-0 nova_compute[194781]: 2025-10-02 19:40:40.031 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:40 compute-0 nova_compute[194781]: 2025-10-02 19:40:40.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:41 compute-0 nova_compute[194781]: 2025-10-02 19:40:41.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:40:43 compute-0 podman[257479]: 2025-10-02 19:40:43.722611298 +0000 UTC m=+0.087136986 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:40:43 compute-0 podman[257480]: 2025-10-02 19:40:43.740881724 +0000 UTC m=+0.108626598 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.837 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.838 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.838 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:40:43 compute-0 nova_compute[194781]: 2025-10-02 19:40:43.839 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:40:45 compute-0 nova_compute[194781]: 2025-10-02 19:40:45.436 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:40:45 compute-0 nova_compute[194781]: 2025-10-02 19:40:45.459 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:40:45 compute-0 nova_compute[194781]: 2025-10-02 19:40:45.460 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:40:45 compute-0 nova_compute[194781]: 2025-10-02 19:40:45.461 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:45 compute-0 nova_compute[194781]: 2025-10-02 19:40:45.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:45 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:40:45.656 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:40:45 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:40:45.659 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:40:45 compute-0 nova_compute[194781]: 2025-10-02 19:40:45.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:46 compute-0 nova_compute[194781]: 2025-10-02 19:40:46.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:40:47.485 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:40:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:40:47.486 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:40:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:40:47.487 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:40:47 compute-0 podman[257522]: 2025-10-02 19:40:47.759590704 +0000 UTC m=+0.132432220 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:40:47 compute-0 podman[257523]: 2025-10-02 19:40:47.819524477 +0000 UTC m=+0.176753608 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:40:48 compute-0 nova_compute[194781]: 2025-10-02 19:40:48.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:48 compute-0 nova_compute[194781]: 2025-10-02 19:40:48.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:40:50 compute-0 nova_compute[194781]: 2025-10-02 19:40:50.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:51 compute-0 nova_compute[194781]: 2025-10-02 19:40:51.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:52 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:40:52.662 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:40:53 compute-0 nova_compute[194781]: 2025-10-02 19:40:53.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:40:55 compute-0 podman[257567]: 2025-10-02 19:40:55.761794567 +0000 UTC m=+0.125642820 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:40:55 compute-0 nova_compute[194781]: 2025-10-02 19:40:55.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:56 compute-0 nova_compute[194781]: 2025-10-02 19:40:56.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:40:59 compute-0 podman[209015]: time="2025-10-02T19:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:40:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:40:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5215 "" "Go-http-client/1.1"
Oct 02 19:41:00 compute-0 nova_compute[194781]: 2025-10-02 19:41:00.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:01 compute-0 nova_compute[194781]: 2025-10-02 19:41:01.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: ERROR   19:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: ERROR   19:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: ERROR   19:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: ERROR   19:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: ERROR   19:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:41:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:41:03 compute-0 podman[257590]: 2025-10-02 19:41:03.769976911 +0000 UTC m=+0.126907543 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:41:03 compute-0 podman[257591]: 2025-10-02 19:41:03.794059021 +0000 UTC m=+0.143437832 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, 
tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:41:05 compute-0 nova_compute[194781]: 2025-10-02 19:41:05.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:06 compute-0 nova_compute[194781]: 2025-10-02 19:41:06.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:07 compute-0 podman[257627]: 2025-10-02 19:41:07.774417143 +0000 UTC m=+0.128337492 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9)
Oct 02 19:41:07 compute-0 podman[257628]: 2025-10-02 19:41:07.788538378 +0000 UTC m=+0.136722174 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 02 19:41:07 compute-0 podman[257626]: 2025-10-02 19:41:07.789232776 +0000 UTC m=+0.151521267 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:41:10 compute-0 nova_compute[194781]: 2025-10-02 19:41:10.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:11 compute-0 nova_compute[194781]: 2025-10-02 19:41:11.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.946 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.946 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.956 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.957 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:41:12.957520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.983 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 50590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.985 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.986 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.986 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:41:12.985929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.988 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.989 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:41:12.988993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.993 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.994 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:41:12.994312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:41:12.995198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:41:12.996039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:41:12.997010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.997 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.998 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:12.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:41:12.998207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.060 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.061 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.061 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.063 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.064 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.064 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:41:13.062918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:41:13.064458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.065 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.066 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:41:13.066057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.107 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.108 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.108 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.110 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.110 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.110 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.110 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.113 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:41:13.109973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:41:13.112460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.113 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.114 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.115 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.116 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.117 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:41:13.116706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.119 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:41:13.119404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.120 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.120 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.123 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:41:13.123050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.124 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.124 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.125 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.127 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:41:13.126758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.127 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.128 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:41:13.130670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.131 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.131 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.132 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:41:13.134651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.135 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.135 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.136 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.136 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.137 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.137 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.138 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:41:13.138312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.138 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.140 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:41:13.141130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.142 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.143 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:41:13.143537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:41:13.145965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.146 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.147 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:41:13.148634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.148 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.156 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.156 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.156 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.156 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:41:13.156 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:41:14 compute-0 podman[257685]: 2025-10-02 19:41:14.738016159 +0000 UTC m=+0.109874070 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:41:14 compute-0 podman[257684]: 2025-10-02 19:41:14.754029985 +0000 UTC m=+0.119427345 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:41:15 compute-0 ovn_controller[97052]: 2025-10-02T19:41:15Z|00061|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Oct 02 19:41:15 compute-0 nova_compute[194781]: 2025-10-02 19:41:15.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:16 compute-0 nova_compute[194781]: 2025-10-02 19:41:16.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:18 compute-0 podman[257724]: 2025-10-02 19:41:18.713018528 +0000 UTC m=+0.082718259 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 19:41:18 compute-0 podman[257725]: 2025-10-02 19:41:18.773866585 +0000 UTC m=+0.125504396 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:41:20 compute-0 nova_compute[194781]: 2025-10-02 19:41:20.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:21 compute-0 nova_compute[194781]: 2025-10-02 19:41:21.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:23 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:23.744 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:41:23 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:23.745 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:41:23 compute-0 nova_compute[194781]: 2025-10-02 19:41:23.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:25 compute-0 nova_compute[194781]: 2025-10-02 19:41:25.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:26 compute-0 nova_compute[194781]: 2025-10-02 19:41:26.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:26 compute-0 nova_compute[194781]: 2025-10-02 19:41:26.155 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:26 compute-0 nova_compute[194781]: 2025-10-02 19:41:26.184 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:41:26 compute-0 nova_compute[194781]: 2025-10-02 19:41:26.184 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:26 compute-0 nova_compute[194781]: 2025-10-02 19:41:26.185 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:26 compute-0 nova_compute[194781]: 2025-10-02 19:41:26.215 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:26 compute-0 podman[257768]: 2025-10-02 19:41:26.740162096 +0000 UTC m=+0.103410499 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:41:27 compute-0 nova_compute[194781]: 2025-10-02 19:41:27.063 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:28 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:28.748 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:28 compute-0 nova_compute[194781]: 2025-10-02 19:41:28.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:29 compute-0 podman[209015]: time="2025-10-02T19:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:41:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:41:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5225 "" "Go-http-client/1.1"
Oct 02 19:41:30 compute-0 nova_compute[194781]: 2025-10-02 19:41:30.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:30 compute-0 nova_compute[194781]: 2025-10-02 19:41:30.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:31 compute-0 nova_compute[194781]: 2025-10-02 19:41:31.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:31 compute-0 nova_compute[194781]: 2025-10-02 19:41:31.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:31 compute-0 nova_compute[194781]: 2025-10-02 19:41:31.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:41:31 compute-0 nova_compute[194781]: 2025-10-02 19:41:31.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: ERROR   19:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: ERROR   19:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: ERROR   19:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: ERROR   19:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:41:31 compute-0 openstack_network_exporter[211160]: ERROR   19:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:41:31 compute-0 nova_compute[194781]: 2025-10-02 19:41:31.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:31 compute-0 nova_compute[194781]: 2025-10-02 19:41:31.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:32 compute-0 nova_compute[194781]: 2025-10-02 19:41:32.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:34 compute-0 nova_compute[194781]: 2025-10-02 19:41:34.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:34 compute-0 nova_compute[194781]: 2025-10-02 19:41:34.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:34 compute-0 podman[257791]: 2025-10-02 19:41:34.716760189 +0000 UTC m=+0.091105442 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:41:34 compute-0 podman[257792]: 2025-10-02 19:41:34.733215427 +0000 UTC m=+0.101733785 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 02 19:41:35 compute-0 nova_compute[194781]: 2025-10-02 19:41:35.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:35 compute-0 nova_compute[194781]: 2025-10-02 19:41:35.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:36 compute-0 nova_compute[194781]: 2025-10-02 19:41:36.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:36 compute-0 nova_compute[194781]: 2025-10-02 19:41:36.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.075 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.076 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.076 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.077 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.161 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.221 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.222 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.281 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.283 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.375 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.377 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.439 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.918 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.920 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=72.47976684570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.920 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:37 compute-0 nova_compute[194781]: 2025-10-02 19:41:37.921 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:38 compute-0 sshd-session[257843]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 19:41:38 compute-0 sshd-session[257843]: Connection reset by 45.140.17.97 port 7390
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.015 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.016 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.016 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.068 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.095 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.097 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:41:38 compute-0 nova_compute[194781]: 2025-10-02 19:41:38.098 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:38 compute-0 podman[257845]: 2025-10-02 19:41:38.747631444 +0000 UTC m=+0.106725888 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git)
Oct 02 19:41:38 compute-0 podman[257846]: 2025-10-02 19:41:38.757394853 +0000 UTC m=+0.112963233 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:41:38 compute-0 podman[257844]: 2025-10-02 19:41:38.762356925 +0000 UTC m=+0.131453954 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Oct 02 19:41:40 compute-0 nova_compute[194781]: 2025-10-02 19:41:40.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:40 compute-0 nova_compute[194781]: 2025-10-02 19:41:40.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:41 compute-0 nova_compute[194781]: 2025-10-02 19:41:41.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:41 compute-0 nova_compute[194781]: 2025-10-02 19:41:41.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:43 compute-0 nova_compute[194781]: 2025-10-02 19:41:43.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.099 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.100 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.100 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.366 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.367 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.368 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.369 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:41:45 compute-0 podman[257901]: 2025-10-02 19:41:45.769669192 +0000 UTC m=+0.128671850 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 19:41:45 compute-0 podman[257900]: 2025-10-02 19:41:45.774919451 +0000 UTC m=+0.143150945 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:41:45 compute-0 nova_compute[194781]: 2025-10-02 19:41:45.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:46 compute-0 nova_compute[194781]: 2025-10-02 19:41:46.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:47.487 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:47.488 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:47.490 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:48 compute-0 nova_compute[194781]: 2025-10-02 19:41:48.625 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:41:48 compute-0 nova_compute[194781]: 2025-10-02 19:41:48.652 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:41:48 compute-0 nova_compute[194781]: 2025-10-02 19:41:48.652 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:41:48 compute-0 nova_compute[194781]: 2025-10-02 19:41:48.911 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:48 compute-0 nova_compute[194781]: 2025-10-02 19:41:48.912 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:48 compute-0 nova_compute[194781]: 2025-10-02 19:41:48.931 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.072 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.073 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.081 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.081 2 INFO nova.compute.claims [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.254 2 DEBUG nova.compute.provider_tree [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.286 2 DEBUG nova.scheduler.client.report [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.306 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.307 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.367 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.368 2 DEBUG nova.network.neutron [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.389 2 INFO nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.409 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.505 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.506 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.507 2 INFO nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Creating image(s)
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.507 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.507 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.508 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.508 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.509 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:49 compute-0 nova_compute[194781]: 2025-10-02 19:41:49.678 2 DEBUG nova.policy [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1de0891a14a8410da559e3197c8ff98b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d458e53358c4398b6ba6051d5c82805', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:41:49 compute-0 podman[257945]: 2025-10-02 19:41:49.686367961 +0000 UTC m=+0.062048570 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:41:49 compute-0 podman[257946]: 2025-10-02 19:41:49.749837648 +0000 UTC m=+0.123395680 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:41:50 compute-0 nova_compute[194781]: 2025-10-02 19:41:50.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.096 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.210 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.part --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.211 2 DEBUG nova.virt.images [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] c191839f-7364-41ce-80c8-eff8077fc750 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.213 2 DEBUG nova.privsep.utils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.214 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.part /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.486 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.part /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.converted" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.491 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.571 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.573 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.587 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.645 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.647 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.647 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.660 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.713 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.714 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.768 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.769 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.770 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.838 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.840 2 DEBUG nova.virt.disk.api [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Checking if we can resize image /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.840 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.898 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.899 2 DEBUG nova.virt.disk.api [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Cannot resize image /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.900 2 DEBUG nova.objects.instance [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'migration_context' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.919 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.920 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Ensure instance console log exists: /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.921 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.921 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.922 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:51 compute-0 nova_compute[194781]: 2025-10-02 19:41:51.940 2 DEBUG nova.network.neutron [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Successfully created port: b27e7b6f-4ab7-48d9-a674-eb640289b746 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.142 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "8c3516d0-e1db-4043-8054-0efaf55f8158" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.143 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.171 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.172 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.173 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.203 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.282 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.283 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.290 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.290 2 INFO nova.compute.claims [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.304 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.448 2 DEBUG nova.compute.provider_tree [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.464 2 DEBUG nova.scheduler.client.report [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.493 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.495 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.499 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.513 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.514 2 INFO nova.compute.claims [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.562 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.563 2 DEBUG nova.network.neutron [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.587 2 INFO nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.606 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.689 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.690 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.690 2 INFO nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Creating image(s)
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.690 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "/var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.691 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "/var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.691 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "/var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.705 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.739 2 DEBUG nova.compute.provider_tree [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.763 2 DEBUG nova.scheduler.client.report [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.784 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.785 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.806 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.806 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.807 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.820 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.839 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.839 2 DEBUG nova.network.neutron [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.862 2 INFO nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.874 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.875 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.893 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.915 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.916 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.916 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.944 2 DEBUG nova.policy [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '18a01f0516b04f26b8bbb33e72f1f51f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a31d647ffb4e42d1acec402a98b5d8c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.981 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.983 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.984 2 INFO nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Creating image(s)
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.985 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "/var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.985 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "/var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:52 compute-0 nova_compute[194781]: 2025-10-02 19:41:52.986 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "/var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.003 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.004 2 DEBUG nova.virt.disk.api [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Checking if we can resize image /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.004 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.023 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.072 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.074 2 DEBUG nova.virt.disk.api [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Cannot resize image /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.075 2 DEBUG nova.objects.instance [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 8c3516d0-e1db-4043-8054-0efaf55f8158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.090 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.090 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Ensure instance console log exists: /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.091 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.092 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.092 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.101 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.102 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.103 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.120 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.176 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.178 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.221 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.222 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.223 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.281 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.282 2 DEBUG nova.virt.disk.api [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Checking if we can resize image /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.283 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.371 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.373 2 DEBUG nova.virt.disk.api [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Cannot resize image /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.373 2 DEBUG nova.objects.instance [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lazy-loading 'migration_context' on Instance uuid 802f6003-69b3-4337-9652-641263d5864f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.386 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.387 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Ensure instance console log exists: /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.387 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.388 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.388 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.524 2 DEBUG nova.policy [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e11fb23793a2452993b49534ed668211', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2f53c7449f8e46fb84491ca16ecef449', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:41:53 compute-0 nova_compute[194781]: 2025-10-02 19:41:53.965 2 DEBUG nova.network.neutron [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Successfully created port: 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.430 2 DEBUG nova.network.neutron [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Successfully updated port: b27e7b6f-4ab7-48d9-a674-eb640289b746 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.459 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.460 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquired lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.460 2 DEBUG nova.network.neutron [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.786 2 DEBUG nova.network.neutron [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.917 2 DEBUG nova.network.neutron [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Successfully updated port: 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.935 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.936 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquired lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:41:54 compute-0 nova_compute[194781]: 2025-10-02 19:41:54.936 2 DEBUG nova.network.neutron [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:41:55 compute-0 nova_compute[194781]: 2025-10-02 19:41:55.305 2 DEBUG nova.network.neutron [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:41:55 compute-0 nova_compute[194781]: 2025-10-02 19:41:55.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.015 2 DEBUG nova.network.neutron [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Successfully created port: 2758f8fe-aff6-42fb-9786-112689a5d452 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.410 2 DEBUG nova.network.neutron [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.414 2 DEBUG nova.network.neutron [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Updating instance_info_cache with network_info: [{"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.441 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Releasing lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.442 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance network_info: |[{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.443 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Releasing lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.444 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Instance network_info: |[{"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.449 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Start _get_guest_xml network_info=[{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.454 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Start _get_guest_xml network_info=[{"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.470 2 WARNING nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.475 2 WARNING nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.481 2 DEBUG nova.virt.libvirt.host [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.482 2 DEBUG nova.virt.libvirt.host [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.483 2 DEBUG nova.virt.libvirt.host [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.484 2 DEBUG nova.virt.libvirt.host [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.489 2 DEBUG nova.virt.libvirt.host [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.490 2 DEBUG nova.virt.libvirt.host [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.491 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.492 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.494 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.494 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.495 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.496 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.497 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.498 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.498 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.499 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.500 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.501 2 DEBUG nova.virt.hardware [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.510 2 DEBUG nova.virt.libvirt.vif [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950508224',display_name='tempest-ServerActionsTestJSON-server-1950508224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950508224',id=6,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQ3/bi48ARS3VXn9iWcKo/JXrKXcAcgt+LOQWkb1k3Pe3wzNtwmWDod3uxRQb5Dp+at+GfgNvvsZcS9q05pPmKjxF66rj7w8mLvCmgF8foOmp3mBcRf5ivcSaS/PCliQ==',key_name='tempest-keypair-1857372306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d458e53358c4398b6ba6051d5c82805',ramdisk_id='',reservation_id='r-80w0dyeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-897514974',owner_user_name='tempest-ServerActionsTestJSON-897514974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:41:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1de0891a14a8410da559e3197c8ff98b',uuid=6eada58a-d077-43e5-ab40-dd45abbe38f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.510 2 DEBUG nova.network.os_vif_util [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converting VIF {"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.512 2 DEBUG nova.network.os_vif_util [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.514 2 DEBUG nova.objects.instance [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.516 2 DEBUG nova.virt.libvirt.host [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.517 2 DEBUG nova.virt.libvirt.host [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.517 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.518 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.519 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.519 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.520 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.521 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.521 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.522 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.522 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.523 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.523 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.524 2 DEBUG nova.virt.hardware [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.528 2 DEBUG nova.virt.libvirt.vif [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-81202730',display_name='tempest-ServersTestJSON-server-81202730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-81202730',id=7,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCQcro1hwcfjg7GB90X95/03ec50Xm2PEfPdqjeZYQYKY9bbVvij0sSoEius/UBfyPPI9I1ThZw1xzFqjYDKw5BN5UcEhWKWa0l3gBzTf1ncxRbtf7XpQ+EWfdiquJHpw==',key_name='tempest-keypair-1857723335',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31d647ffb4e42d1acec402a98b5d8c9',ramdisk_id='',reservation_id='r-k2rl1dpw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1875586447',owner_user_name='tempest-ServersTestJSON-1875586447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:41:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18a01f0516b04f26b8bbb33e72f1f51f',uuid=8c3516d0-e1db-4043-8054-0efaf55f8158,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.528 2 DEBUG nova.network.os_vif_util [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Converting VIF {"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.529 2 DEBUG nova.network.os_vif_util [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.530 2 DEBUG nova.objects.instance [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c3516d0-e1db-4043-8054-0efaf55f8158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.533 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <uuid>6eada58a-d077-43e5-ab40-dd45abbe38f3</uuid>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <name>instance-00000006</name>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:name>tempest-ServerActionsTestJSON-server-1950508224</nova:name>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:41:56</nova:creationTime>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:user uuid="1de0891a14a8410da559e3197c8ff98b">tempest-ServerActionsTestJSON-897514974-project-member</nova:user>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:project uuid="5d458e53358c4398b6ba6051d5c82805">tempest-ServerActionsTestJSON-897514974</nova:project>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:port uuid="b27e7b6f-4ab7-48d9-a674-eb640289b746">
Oct 02 19:41:56 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <system>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="serial">6eada58a-d077-43e5-ab40-dd45abbe38f3</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="uuid">6eada58a-d077-43e5-ab40-dd45abbe38f3</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </system>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <os>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </os>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <features>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </features>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:15:84:0f"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <target dev="tapb27e7b6f-4a"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/console.log" append="off"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <video>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </video>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:41:56 compute-0 nova_compute[194781]: </domain>
Oct 02 19:41:56 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.535 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Preparing to wait for external event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.536 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.537 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.537 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.538 2 DEBUG nova.virt.libvirt.vif [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950508224',display_name='tempest-ServerActionsTestJSON-server-1950508224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950508224',id=6,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQ3/bi48ARS3VXn9iWcKo/JXrKXcAcgt+LOQWkb1k3Pe3wzNtwmWDod3uxRQb5Dp+at+GfgNvvsZcS9q05pPmKjxF66rj7w8mLvCmgF8foOmp3mBcRf5ivcSaS/PCliQ==',key_name='tempest-keypair-1857372306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d458e53358c4398b6ba6051d5c82805',ramdisk_id='',reservation_id='r-80w0dyeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-897514974',owner_user_name='tempest-ServerActionsTestJSON-897514974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:41:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1de0891a14a8410da559e3197c8ff98b',uuid=6eada58a-d077-43e5-ab40-dd45abbe38f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.539 2 DEBUG nova.network.os_vif_util [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converting VIF {"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.540 2 DEBUG nova.network.os_vif_util [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.541 2 DEBUG os_vif [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb27e7b6f-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb27e7b6f-4a, col_values=(('external_ids', {'iface-id': 'b27e7b6f-4ab7-48d9-a674-eb640289b746', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:84:0f', 'vm-uuid': '6eada58a-d077-43e5-ab40-dd45abbe38f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 NetworkManager[52324]: <info>  [1759434116.5534] manager: (tapb27e7b6f-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.555 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <uuid>8c3516d0-e1db-4043-8054-0efaf55f8158</uuid>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <name>instance-00000007</name>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:name>tempest-ServersTestJSON-server-81202730</nova:name>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:41:56</nova:creationTime>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:user uuid="18a01f0516b04f26b8bbb33e72f1f51f">tempest-ServersTestJSON-1875586447-project-member</nova:user>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:project uuid="a31d647ffb4e42d1acec402a98b5d8c9">tempest-ServersTestJSON-1875586447</nova:project>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         <nova:port uuid="5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d">
Oct 02 19:41:56 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <system>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="serial">8c3516d0-e1db-4043-8054-0efaf55f8158</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="uuid">8c3516d0-e1db-4043-8054-0efaf55f8158</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </system>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <os>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </os>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <features>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </features>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.config"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:ab:56:eb"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <target dev="tap5a48a8a2-e2"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/console.log" append="off"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <video>
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </video>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:41:56 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:41:56 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:41:56 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:41:56 compute-0 nova_compute[194781]: </domain>
Oct 02 19:41:56 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.556 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Preparing to wait for external event network-vif-plugged-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.556 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.557 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.557 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.558 2 DEBUG nova.virt.libvirt.vif [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-81202730',display_name='tempest-ServersTestJSON-server-81202730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-81202730',id=7,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCQcro1hwcfjg7GB90X95/03ec50Xm2PEfPdqjeZYQYKY9bbVvij0sSoEius/UBfyPPI9I1ThZw1xzFqjYDKw5BN5UcEhWKWa0l3gBzTf1ncxRbtf7XpQ+EWfdiquJHpw==',key_name='tempest-keypair-1857723335',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31d647ffb4e42d1acec402a98b5d8c9',ramdisk_id='',reservation_id='r-k2rl1dpw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1875586447',owner_user_name='tempest-ServersTestJSON-1875586447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:41:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18a01f0516b04f26b8bbb33e72f1f51f',uuid=8c3516d0-e1db-4043-8054-0efaf55f8158,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.559 2 DEBUG nova.network.os_vif_util [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Converting VIF {"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.560 2 DEBUG nova.network.os_vif_util [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.560 2 DEBUG os_vif [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.566 2 INFO os_vif [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a')
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a48a8a2-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a48a8a2-e2, col_values=(('external_ids', {'iface-id': '5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:56:eb', 'vm-uuid': '8c3516d0-e1db-4043-8054-0efaf55f8158'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 NetworkManager[52324]: <info>  [1759434116.5810] manager: (tap5a48a8a2-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.596 2 INFO os_vif [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2')
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.624 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.625 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.625 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] No VIF found with MAC fa:16:3e:15:84:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.626 2 INFO nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Using config drive
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.665 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.665 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.665 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] No VIF found with MAC fa:16:3e:ab:56:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.666 2 INFO nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Using config drive
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.939 2 DEBUG nova.compute.manager [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-changed-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.939 2 DEBUG nova.compute.manager [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Refreshing instance network info cache due to event network-changed-b27e7b6f-4ab7-48d9-a674-eb640289b746. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.940 2 DEBUG oslo_concurrency.lockutils [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.940 2 DEBUG oslo_concurrency.lockutils [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:41:56 compute-0 nova_compute[194781]: 2025-10-02 19:41:56.941 2 DEBUG nova.network.neutron [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Refreshing network info cache for port b27e7b6f-4ab7-48d9-a674-eb640289b746 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.034 2 DEBUG nova.compute.manager [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Received event network-changed-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.034 2 DEBUG nova.compute.manager [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Refreshing instance network info cache due to event network-changed-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.035 2 DEBUG oslo_concurrency.lockutils [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.035 2 DEBUG oslo_concurrency.lockutils [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.035 2 DEBUG nova.network.neutron [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Refreshing network info cache for port 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.169 2 INFO nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Creating config drive at /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.config
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.180 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvlq_96f4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.237 2 INFO nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Creating config drive at /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.248 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpet8d8g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.328 2 DEBUG oslo_concurrency.processutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvlq_96f4" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.375 2 DEBUG oslo_concurrency.processutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpet8d8g" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:41:57 compute-0 kernel: tap5a48a8a2-e2: entered promiscuous mode
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.4452] manager: (tap5a48a8a2-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00062|binding|INFO|Claiming lport 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d for this chassis.
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00063|binding|INFO|5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d: Claiming fa:16:3e:ab:56:eb 10.100.0.7
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.467 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:56:eb 10.100.0.7'], port_security=['fa:16:3e:ab:56:eb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8c3516d0-e1db-4043-8054-0efaf55f8158', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-297d3600-5c6c-4db6-8640-a20cc0215d99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31d647ffb4e42d1acec402a98b5d8c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be514375-c77b-41ee-bc81-d536a625090a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c41abe9-f69b-4cd8-8e79-3a3c37342998, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.469 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d in datapath 297d3600-5c6c-4db6-8640-a20cc0215d99 bound to our chassis
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.471 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 297d3600-5c6c-4db6-8640-a20cc0215d99
Oct 02 19:41:57 compute-0 kernel: tapb27e7b6f-4a: entered promiscuous mode
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.4793] manager: (tapb27e7b6f-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00064|binding|INFO|Setting lport 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d ovn-installed in OVS
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00065|binding|INFO|Setting lport 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d up in Southbound
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00066|if_status|INFO|Not updating pb chassis for b27e7b6f-4ab7-48d9-a674-eb640289b746 now as sb is readonly
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.486 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea31200-95fa-468c-b347-40294f2cc4f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.488 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap297d3600-51 in ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.489 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap297d3600-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.490 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[82515e6a-9d98-43bb-bc68-e2edf3769808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.491 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1b7955-65ec-4859-bd54-790411d00a46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00067|binding|INFO|Claiming lport b27e7b6f-4ab7-48d9-a674-eb640289b746 for this chassis.
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00068|binding|INFO|b27e7b6f-4ab7-48d9-a674-eb640289b746: Claiming fa:16:3e:15:84:0f 10.100.0.3
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00069|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.521 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[63baaeae-8463-441e-90c4-a48e79184901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 systemd-udevd[258100]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:41:57 compute-0 systemd-udevd[258098]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00070|binding|INFO|Setting lport b27e7b6f-4ab7-48d9-a674-eb640289b746 ovn-installed in OVS
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00071|binding|INFO|Setting lport b27e7b6f-4ab7-48d9-a674-eb640289b746 up in Southbound
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.534 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:84:0f 10.100.0.3'], port_security=['fa:16:3e:15:84:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6eada58a-d077-43e5-ab40-dd45abbe38f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d458e53358c4398b6ba6051d5c82805', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d169388-279d-4835-af73-74628348527d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61d3b384-7807-48c7-ac4b-e6e147bd5ac4, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=b27e7b6f-4ab7-48d9-a674-eb640289b746) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:41:57 compute-0 systemd-machined[154795]: New machine qemu-6-instance-00000007.
Oct 02 19:41:57 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000007.
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.5519] device (tap5a48a8a2-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.5530] device (tapb27e7b6f-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.5540] device (tap5a48a8a2-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.5545] device (tapb27e7b6f-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.553 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[885ba619-e257-4585-b8f0-e4f4a0deb017]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 systemd-machined[154795]: New machine qemu-7-instance-00000006.
Oct 02 19:41:57 compute-0 podman[258062]: 2025-10-02 19:41:57.576122228 +0000 UTC m=+0.149372801 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.584 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[ca067e57-0818-44d5-8a97-9da73c447f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000006.
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.589 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9853d893-0328-4d6c-8d73-6901ee3dfbb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.5918] manager: (tap297d3600-50): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.623 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[7448331c-6f54-4d02-b951-a7be81658b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.626 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f607f-1ba5-4412-9fe0-159d055a0cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.6545] device (tap297d3600-50): carrier: link connected
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.658 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6e1775-2942-4a37-ad29-27e2571e4109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.677 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[71abc6e8-25cc-4e8a-b719-7f3599fce768]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap297d3600-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ad:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527827, 'reachable_time': 41545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258146, 'error': None, 'target': 'ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.693 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[312b3b1d-94e2-400d-92f7-c6e2315d0173]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:ad0f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527827, 'tstamp': 527827}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258148, 'error': None, 'target': 'ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.718 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[21aa166b-f898-4b25-ae11-098a8afa68b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap297d3600-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ad:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527827, 'reachable_time': 41545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258149, 'error': None, 'target': 'ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.753 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e0502c51-031b-4aa5-9b6f-fa7aa1ccf5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.834 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[12de5a29-74f9-4360-922f-111af0a9b0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.837 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297d3600-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.837 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.838 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap297d3600-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:57 compute-0 kernel: tap297d3600-50: entered promiscuous mode
Oct 02 19:41:57 compute-0 NetworkManager[52324]: <info>  [1759434117.8422] manager: (tap297d3600-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.852 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap297d3600-50, col_values=(('external_ids', {'iface-id': '74eec250-433e-49a6-99c0-57cb4cde4831'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:57 compute-0 ovn_controller[97052]: 2025-10-02T19:41:57Z|00072|binding|INFO|Releasing lport 74eec250-433e-49a6-99c0-57cb4cde4831 from this chassis (sb_readonly=0)
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.856 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/297d3600-5c6c-4db6-8640-a20cc0215d99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/297d3600-5c6c-4db6-8640-a20cc0215d99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.857 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[8325d0f1-649f-4935-b086-b97f1737b101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.858 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-297d3600-5c6c-4db6-8640-a20cc0215d99
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/297d3600-5c6c-4db6-8640-a20cc0215d99.pid.haproxy
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID 297d3600-5c6c-4db6-8640-a20cc0215d99
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:41:57 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:57.861 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99', 'env', 'PROCESS_TAG=haproxy-297d3600-5c6c-4db6-8640-a20cc0215d99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/297d3600-5c6c-4db6-8640-a20cc0215d99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:41:57 compute-0 nova_compute[194781]: 2025-10-02 19:41:57.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:58 compute-0 podman[258194]: 2025-10-02 19:41:58.392390379 +0000 UTC m=+0.122859556 container create 2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 19:41:58 compute-0 podman[258194]: 2025-10-02 19:41:58.328373558 +0000 UTC m=+0.058842835 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:41:58 compute-0 systemd[1]: Started libpod-conmon-2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0.scope.
Oct 02 19:41:58 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa99f545af99ada6f797850e3ea262f89b1d45e0b20487124a2103bfb2ecd52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.514 2 DEBUG nova.network.neutron [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Successfully updated port: 2758f8fe-aff6-42fb-9786-112689a5d452 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:41:58 compute-0 podman[258194]: 2025-10-02 19:41:58.517557644 +0000 UTC m=+0.248026851 container init 2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 19:41:58 compute-0 podman[258194]: 2025-10-02 19:41:58.530873408 +0000 UTC m=+0.261342585 container start 2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.532 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.532 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquired lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.533 2 DEBUG nova.network.neutron [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:41:58 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [NOTICE]   (258211) : New worker (258213) forked
Oct 02 19:41:58 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [NOTICE]   (258211) : Loading success.
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.590 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434118.5891263, 6eada58a-d077-43e5-ab40-dd45abbe38f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.590 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] VM Started (Lifecycle Event)
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.595 105943 INFO neutron.agent.ovn.metadata.agent [-] Port b27e7b6f-4ab7-48d9-a674-eb640289b746 in datapath a4e44b64-c472-49fb-ac29-fcbb65fb1bdc unbound from our chassis
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.598 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a4e44b64-c472-49fb-ac29-fcbb65fb1bdc
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.610 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a0187cd7-1025-4f01-ad73-062cd170af1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.611 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa4e44b64-c1 in ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.616 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa4e44b64-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.616 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0cecb7db-3136-47e2-9eb2-b27cc9d4e4c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.618 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc58ddf-9564-4fd3-8eed-b4bf9710b93e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.620 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.627 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434118.589457, 6eada58a-d077-43e5-ab40-dd45abbe38f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.627 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] VM Paused (Lifecycle Event)
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.641 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[3e663bbd-d58a-48fd-971f-84e4b0c0c670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.652 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.668 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5bdbe1-de74-417a-982f-8e4d6f06b79f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.670 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.699 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.706 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434118.7057433, 8c3516d0-e1db-4043-8054-0efaf55f8158 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.706 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] VM Started (Lifecycle Event)
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.708 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[8660c646-45f8-40aa-9b1e-f79a5cf0ba2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.714 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd255f4-e20d-485a-aad4-c4edf3b18dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 NetworkManager[52324]: <info>  [1759434118.7155] manager: (tapa4e44b64-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 02 19:41:58 compute-0 systemd-udevd[258129]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.730 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.740 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434118.7059119, 8c3516d0-e1db-4043-8054-0efaf55f8158 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.740 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] VM Paused (Lifecycle Event)
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.755 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[6b35560f-7c3e-445e-b25c-8a18acc19e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.758 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ffd39a-2e31-4fcf-a2fd-3e69e9c7b9af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.763 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.768 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:41:58 compute-0 NetworkManager[52324]: <info>  [1759434118.7814] device (tapa4e44b64-c0): carrier: link connected
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.787 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.787 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[29841fc2-7746-431a-94ad-44b6ad65e994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.802 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6e14a837-c8ea-43c5-b711-66f9cb8a7bdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa4e44b64-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c2:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527940, 'reachable_time': 39327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258233, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.817 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[db0b34bf-4a37-485b-8efc-6cf26621edcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:c2db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527940, 'tstamp': 527940}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258234, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.833 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[4a450d77-6061-40e4-b118-ed5b59275e71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa4e44b64-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c2:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527940, 'reachable_time': 39327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258235, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.841 2 DEBUG nova.network.neutron [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.865 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[eda0a61d-58de-47cd-bdc4-bf2a69d9f751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.903 2 DEBUG nova.network.neutron [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Updated VIF entry in instance network info cache for port 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.904 2 DEBUG nova.network.neutron [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Updating instance_info_cache with network_info: [{"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.929 2 DEBUG oslo_concurrency.lockutils [req-c9c9706b-f080-40af-b46f-34229be12c80 req-f13b1f23-5bce-48f8-900a-2a3fa73b7684 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.942 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b7c9d8-4abf-45aa-a7cf-0505d871f47f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.945 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4e44b64-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.947 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.948 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4e44b64-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:58 compute-0 NetworkManager[52324]: <info>  [1759434118.9527] manager: (tapa4e44b64-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 02 19:41:58 compute-0 kernel: tapa4e44b64-c0: entered promiscuous mode
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.959 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa4e44b64-c0, col_values=(('external_ids', {'iface-id': 'bd80466a-6146-45a7-be35-ec332e1ee93c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:58 compute-0 ovn_controller[97052]: 2025-10-02T19:41:58Z|00073|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.964 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.966 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[037d8c21-cbf2-48da-92ad-78bce0a493df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.967 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.pid.haproxy
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID a4e44b64-c472-49fb-ac29-fcbb65fb1bdc
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:41:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:41:58.968 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'env', 'PROCESS_TAG=haproxy-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:41:58 compute-0 nova_compute[194781]: 2025-10-02 19:41:58.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:41:59 compute-0 nova_compute[194781]: 2025-10-02 19:41:59.290 2 DEBUG nova.network.neutron [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updated VIF entry in instance network info cache for port b27e7b6f-4ab7-48d9-a674-eb640289b746. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:41:59 compute-0 nova_compute[194781]: 2025-10-02 19:41:59.290 2 DEBUG nova.network.neutron [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:41:59 compute-0 nova_compute[194781]: 2025-10-02 19:41:59.309 2 DEBUG oslo_concurrency.lockutils [req-6acb6c06-f02c-42b3-a96d-8cfe9247a65f req-84aae5b6-9cc7-4ca6-a51c-da50c87a7b95 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:41:59 compute-0 podman[258267]: 2025-10-02 19:41:59.486007829 +0000 UTC m=+0.092569871 container create c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 19:41:59 compute-0 podman[258267]: 2025-10-02 19:41:59.437641324 +0000 UTC m=+0.044203426 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:41:59 compute-0 systemd[1]: Started libpod-conmon-c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294.scope.
Oct 02 19:41:59 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 02 19:41:59 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:41:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66001d246673b0665425e2087498d4517c4034ffe6806f92b679c8f5f88a61a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:41:59 compute-0 podman[258267]: 2025-10-02 19:41:59.630788077 +0000 UTC m=+0.237350109 container init c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 19:41:59 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 02 19:41:59 compute-0 podman[258267]: 2025-10-02 19:41:59.653821309 +0000 UTC m=+0.260383331 container start c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:41:59 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [NOTICE]   (258303) : New worker (258305) forked
Oct 02 19:41:59 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [NOTICE]   (258303) : Loading success.
Oct 02 19:41:59 compute-0 podman[209015]: time="2025-10-02T19:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:41:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34447 "" "Go-http-client/1.1"
Oct 02 19:41:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6137 "" "Go-http-client/1.1"
Oct 02 19:42:00 compute-0 nova_compute[194781]: 2025-10-02 19:42:00.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.074 2 DEBUG nova.network.neutron [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Updating instance_info_cache with network_info: [{"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.103 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Releasing lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.104 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Instance network_info: |[{"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.107 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Start _get_guest_xml network_info=[{"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.116 2 WARNING nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.123 2 DEBUG nova.virt.libvirt.host [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.124 2 DEBUG nova.virt.libvirt.host [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.133 2 DEBUG nova.virt.libvirt.host [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.134 2 DEBUG nova.virt.libvirt.host [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.134 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.134 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.135 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.135 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.135 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.136 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.136 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.136 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.137 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.137 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.137 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.137 2 DEBUG nova.virt.hardware [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.142 2 DEBUG nova.virt.libvirt.vif [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-690272599',display_name='tempest-ServersTestManualDisk-server-690272599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-690272599',id=8,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHqaX30hCJdLmbOfxgofGE14eqhapGuSxNbf8004P16cyFX+BDeR0BOc8E0L54R4mGxNJDr8fyZr+4oTbD/zyFtWB/zaHTHBsmExBW6jXPw9zFL+x3sOHyE0zXP3jIqk3Q==',key_name='tempest-keypair-460882991',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f53c7449f8e46fb84491ca16ecef449',ramdisk_id='',reservation_id='r-cg2fwbiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1647165318',owner_user_name='tempest-ServersTestManualDisk-1647165318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:41:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e11fb23793a2452993b49534ed668211',uuid=802f6003-69b3-4337-9652-641263d5864f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.142 2 DEBUG nova.network.os_vif_util [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Converting VIF {"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.143 2 DEBUG nova.network.os_vif_util [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.144 2 DEBUG nova.objects.instance [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lazy-loading 'pci_devices' on Instance uuid 802f6003-69b3-4337-9652-641263d5864f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.157 2 DEBUG nova.compute.manager [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-changed-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.157 2 DEBUG nova.compute.manager [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Refreshing instance network info cache due to event network-changed-2758f8fe-aff6-42fb-9786-112689a5d452. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.158 2 DEBUG oslo_concurrency.lockutils [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.158 2 DEBUG oslo_concurrency.lockutils [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.158 2 DEBUG nova.network.neutron [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Refreshing network info cache for port 2758f8fe-aff6-42fb-9786-112689a5d452 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.161 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <uuid>802f6003-69b3-4337-9652-641263d5864f</uuid>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <name>instance-00000008</name>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:name>tempest-ServersTestManualDisk-server-690272599</nova:name>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:42:01</nova:creationTime>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:user uuid="e11fb23793a2452993b49534ed668211">tempest-ServersTestManualDisk-1647165318-project-member</nova:user>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:project uuid="2f53c7449f8e46fb84491ca16ecef449">tempest-ServersTestManualDisk-1647165318</nova:project>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         <nova:port uuid="2758f8fe-aff6-42fb-9786-112689a5d452">
Oct 02 19:42:01 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <system>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <entry name="serial">802f6003-69b3-4337-9652-641263d5864f</entry>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <entry name="uuid">802f6003-69b3-4337-9652-641263d5864f</entry>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </system>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <os>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </os>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <features>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </features>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.config"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:72:4f:b3"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <target dev="tap2758f8fe-af"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/console.log" append="off"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <video>
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </video>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:42:01 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:42:01 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:42:01 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:42:01 compute-0 nova_compute[194781]: </domain>
Oct 02 19:42:01 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.162 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Preparing to wait for external event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.162 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.162 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.162 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.163 2 DEBUG nova.virt.libvirt.vif [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-690272599',display_name='tempest-ServersTestManualDisk-server-690272599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-690272599',id=8,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHqaX30hCJdLmbOfxgofGE14eqhapGuSxNbf8004P16cyFX+BDeR0BOc8E0L54R4mGxNJDr8fyZr+4oTbD/zyFtWB/zaHTHBsmExBW6jXPw9zFL+x3sOHyE0zXP3jIqk3Q==',key_name='tempest-keypair-460882991',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f53c7449f8e46fb84491ca16ecef449',ramdisk_id='',reservation_id='r-cg2fwbiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1647165318',owner_user_name='tempest-ServersTestManualDisk-1647165318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:41:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e11fb23793a2452993b49534ed668211',uuid=802f6003-69b3-4337-9652-641263d5864f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.163 2 DEBUG nova.network.os_vif_util [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Converting VIF {"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.164 2 DEBUG nova.network.os_vif_util [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.164 2 DEBUG os_vif [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.166 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2758f8fe-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2758f8fe-af, col_values=(('external_ids', {'iface-id': '2758f8fe-aff6-42fb-9786-112689a5d452', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:4f:b3', 'vm-uuid': '802f6003-69b3-4337-9652-641263d5864f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 NetworkManager[52324]: <info>  [1759434121.1763] manager: (tap2758f8fe-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.181 2 INFO os_vif [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af')
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.247 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.248 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.249 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] No VIF found with MAC fa:16:3e:72:4f:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.249 2 INFO nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Using config drive
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: ERROR   19:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: ERROR   19:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: ERROR   19:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: ERROR   19:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: ERROR   19:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:42:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.748 2 INFO nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Creating config drive at /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.config
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.753 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5h6tbryn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.876 2 DEBUG oslo_concurrency.processutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5h6tbryn" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:01 compute-0 kernel: tap2758f8fe-af: entered promiscuous mode
Oct 02 19:42:01 compute-0 ovn_controller[97052]: 2025-10-02T19:42:01Z|00074|binding|INFO|Claiming lport 2758f8fe-aff6-42fb-9786-112689a5d452 for this chassis.
Oct 02 19:42:01 compute-0 NetworkManager[52324]: <info>  [1759434121.9563] manager: (tap2758f8fe-af): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 02 19:42:01 compute-0 ovn_controller[97052]: 2025-10-02T19:42:01Z|00075|binding|INFO|2758f8fe-aff6-42fb-9786-112689a5d452: Claiming fa:16:3e:72:4f:b3 10.100.0.6
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.966 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:4f:b3 10.100.0.6'], port_security=['fa:16:3e:72:4f:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '802f6003-69b3-4337-9652-641263d5864f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a351c78-806e-4438-a270-95c4b5a89d4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f53c7449f8e46fb84491ca16ecef449', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dff9e43b-8314-4f25-a289-4dacbe747f4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e0e619b-7ec9-4af4-aa82-90a8356f1ae8, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=2758f8fe-aff6-42fb-9786-112689a5d452) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.967 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 2758f8fe-aff6-42fb-9786-112689a5d452 in datapath 8a351c78-806e-4438-a270-95c4b5a89d4d bound to our chassis
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.969 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a351c78-806e-4438-a270-95c4b5a89d4d
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 ovn_controller[97052]: 2025-10-02T19:42:01Z|00076|binding|INFO|Setting lport 2758f8fe-aff6-42fb-9786-112689a5d452 ovn-installed in OVS
Oct 02 19:42:01 compute-0 ovn_controller[97052]: 2025-10-02T19:42:01Z|00077|binding|INFO|Setting lport 2758f8fe-aff6-42fb-9786-112689a5d452 up in Southbound
Oct 02 19:42:01 compute-0 nova_compute[194781]: 2025-10-02 19:42:01.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.981 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[b6207117-ab39-4aeb-916a-5015ce2b41fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.982 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a351c78-81 in ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.984 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a351c78-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.984 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[60c5136b-8de7-4cd1-a690-b49cd9b43914]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:01.985 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[4f03c4ea-e100-42ad-a0f0-a269197799a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:01 compute-0 systemd-udevd[258336]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:42:01 compute-0 systemd-machined[154795]: New machine qemu-8-instance-00000008.
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.007 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[c8238c04-89c1-4f95-aa63-d4903ada25bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 NetworkManager[52324]: <info>  [1759434122.0088] device (tap2758f8fe-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:42:02 compute-0 NetworkManager[52324]: <info>  [1759434122.0097] device (tap2758f8fe-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:42:02 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.034 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[7903d0c8-bec5-4a99-bb0e-8589781415c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.062 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f03690-0254-40ea-87a5-24b43dda20a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 systemd-udevd[258339]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:42:02 compute-0 NetworkManager[52324]: <info>  [1759434122.0688] manager: (tap8a351c78-80): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.068 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[93301071-56cd-4f98-b893-115d794c3994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.103 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[74ce8581-5a69-4a35-a1e9-91bc776cea8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.106 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1d5b17-030a-4a70-ab2e-7f948478f650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 NetworkManager[52324]: <info>  [1759434122.1328] device (tap8a351c78-80): carrier: link connected
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.141 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[4de8378e-e764-4065-9ca7-c5ab59e39555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.162 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[4540614c-e402-4068-ad73-fe43f78a8d7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a351c78-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:43:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528275, 'reachable_time': 32464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258369, 'error': None, 'target': 'ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.182 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ce893bfe-9da0-4b34-847f-84eabfd10929]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:4371'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528275, 'tstamp': 528275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258371, 'error': None, 'target': 'ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.209 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[3c028371-91da-419c-bc4f-f42fba81b5f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a351c78-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:43:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528275, 'reachable_time': 32464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258372, 'error': None, 'target': 'ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.246 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[d7424362-5dd5-41db-a1ce-456e340ab58f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.310 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[8011649d-f3f2-4e78-a211-d140b50ae2c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.311 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a351c78-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.312 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.312 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a351c78-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:02 compute-0 nova_compute[194781]: 2025-10-02 19:42:02.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:02 compute-0 kernel: tap8a351c78-80: entered promiscuous mode
Oct 02 19:42:02 compute-0 NetworkManager[52324]: <info>  [1759434122.3165] manager: (tap8a351c78-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.325 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a351c78-80, col_values=(('external_ids', {'iface-id': 'a8c22f72-a3ec-481d-9eec-24f5951376c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:02 compute-0 nova_compute[194781]: 2025-10-02 19:42:02.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:02 compute-0 ovn_controller[97052]: 2025-10-02T19:42:02Z|00078|binding|INFO|Releasing lport a8c22f72-a3ec-481d-9eec-24f5951376c0 from this chassis (sb_readonly=0)
Oct 02 19:42:02 compute-0 nova_compute[194781]: 2025-10-02 19:42:02.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.330 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a351c78-806e-4438-a270-95c4b5a89d4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a351c78-806e-4438-a270-95c4b5a89d4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.331 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[69932efb-5779-456e-8b46-b2a8253384f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.332 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-8a351c78-806e-4438-a270-95c4b5a89d4d
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/8a351c78-806e-4438-a270-95c4b5a89d4d.pid.haproxy
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID 8a351c78-806e-4438-a270-95c4b5a89d4d
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:42:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:02.333 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d', 'env', 'PROCESS_TAG=haproxy-8a351c78-806e-4438-a270-95c4b5a89d4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a351c78-806e-4438-a270-95c4b5a89d4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:42:02 compute-0 nova_compute[194781]: 2025-10-02 19:42:02.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:02 compute-0 podman[258411]: 2025-10-02 19:42:02.780908725 +0000 UTC m=+0.069486937 container create 7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 19:42:02 compute-0 systemd[1]: Started libpod-conmon-7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81.scope.
Oct 02 19:42:02 compute-0 podman[258411]: 2025-10-02 19:42:02.745772461 +0000 UTC m=+0.034350763 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:42:02 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ad5677af026dde445e001194fc8508e1661f6fbcc86f2e14b13eaee31164df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:02 compute-0 podman[258411]: 2025-10-02 19:42:02.877726208 +0000 UTC m=+0.166304440 container init 7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:42:02 compute-0 podman[258411]: 2025-10-02 19:42:02.885742641 +0000 UTC m=+0.174320853 container start 7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 19:42:02 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [NOTICE]   (258430) : New worker (258432) forked
Oct 02 19:42:02 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [NOTICE]   (258430) : Loading success.
Oct 02 19:42:02 compute-0 nova_compute[194781]: 2025-10-02 19:42:02.995 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434122.995092, 802f6003-69b3-4337-9652-641263d5864f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:02 compute-0 nova_compute[194781]: 2025-10-02 19:42:02.996 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] VM Started (Lifecycle Event)
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.020 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.025 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434122.9952173, 802f6003-69b3-4337-9652-641263d5864f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.026 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] VM Paused (Lifecycle Event)
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.044 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.049 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.069 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.624 2 DEBUG nova.network.neutron [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Updated VIF entry in instance network info cache for port 2758f8fe-aff6-42fb-9786-112689a5d452. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.625 2 DEBUG nova.network.neutron [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Updating instance_info_cache with network_info: [{"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:03 compute-0 nova_compute[194781]: 2025-10-02 19:42:03.639 2 DEBUG oslo_concurrency.lockutils [req-a2cd1f91-5650-4007-92bf-74439066a751 req-21232956-46c9-4744-a546-2a881ada3a9a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.109 2 DEBUG nova.compute.manager [req-b6840094-72a4-40bd-a981-3359110a92f8 req-f8e86959-49dc-461b-adb0-97b2fe41216d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Received event network-vif-plugged-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.110 2 DEBUG oslo_concurrency.lockutils [req-b6840094-72a4-40bd-a981-3359110a92f8 req-f8e86959-49dc-461b-adb0-97b2fe41216d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.111 2 DEBUG oslo_concurrency.lockutils [req-b6840094-72a4-40bd-a981-3359110a92f8 req-f8e86959-49dc-461b-adb0-97b2fe41216d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.111 2 DEBUG oslo_concurrency.lockutils [req-b6840094-72a4-40bd-a981-3359110a92f8 req-f8e86959-49dc-461b-adb0-97b2fe41216d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.112 2 DEBUG nova.compute.manager [req-b6840094-72a4-40bd-a981-3359110a92f8 req-f8e86959-49dc-461b-adb0-97b2fe41216d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Processing event network-vif-plugged-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.113 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.120 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434124.1196241, 8c3516d0-e1db-4043-8054-0efaf55f8158 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.121 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] VM Resumed (Lifecycle Event)
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.122 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.131 2 INFO nova.virt.libvirt.driver [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Instance spawned successfully.
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.132 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.155 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.161 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.175 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.175 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.175 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.176 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.176 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.176 2 DEBUG nova.virt.libvirt.driver [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.180 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.236 2 INFO nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Took 11.55 seconds to spawn the instance on the hypervisor.
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.236 2 DEBUG nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.331 2 INFO nova.compute.manager [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Took 12.09 seconds to build instance.
Oct 02 19:42:04 compute-0 nova_compute[194781]: 2025-10-02 19:42:04.380 2 DEBUG oslo_concurrency.lockutils [None req-6c7d8370-8bc3-43e1-860d-b98b0db157f1 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:05 compute-0 podman[258442]: 2025-10-02 19:42:05.72654712 +0000 UTC m=+0.095255382 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:42:05 compute-0 podman[258441]: 2025-10-02 19:42:05.739316589 +0000 UTC m=+0.112382927 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, 
container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:05 compute-0 nova_compute[194781]: 2025-10-02 19:42:05.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:06 compute-0 nova_compute[194781]: 2025-10-02 19:42:06.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.179 2 DEBUG nova.compute.manager [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Received event network-vif-plugged-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.180 2 DEBUG oslo_concurrency.lockutils [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.181 2 DEBUG oslo_concurrency.lockutils [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.182 2 DEBUG oslo_concurrency.lockutils [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.183 2 DEBUG nova.compute.manager [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] No waiting events found dispatching network-vif-plugged-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.184 2 WARNING nova.compute.manager [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Received unexpected event network-vif-plugged-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d for instance with vm_state active and task_state None.
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.185 2 DEBUG nova.compute.manager [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.186 2 DEBUG oslo_concurrency.lockutils [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.187 2 DEBUG oslo_concurrency.lockutils [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.187 2 DEBUG oslo_concurrency.lockutils [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.188 2 DEBUG nova.compute.manager [req-2139f830-b5c6-4ec9-b163-3a1f15a87c3e req-4844353c-4a72-4533-bfff-54f47e4632c9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Processing event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.190 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.197 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434127.197341, 6eada58a-d077-43e5-ab40-dd45abbe38f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.198 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] VM Resumed (Lifecycle Event)
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.202 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.211 2 INFO nova.virt.libvirt.driver [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance spawned successfully.
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.212 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.227 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.239 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.250 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.253 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.258 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.259 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.265 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.267 2 DEBUG nova.virt.libvirt.driver [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.273 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.337 2 INFO nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Took 17.83 seconds to spawn the instance on the hypervisor.
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.337 2 DEBUG nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.410 2 INFO nova.compute.manager [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Took 18.37 seconds to build instance.
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.424 2 DEBUG oslo_concurrency.lockutils [None req-0b60ed8a-2bda-4d3c-ae59-5230eaff4f9a 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.857 2 DEBUG nova.compute.manager [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Received event network-changed-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.857 2 DEBUG nova.compute.manager [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Refreshing instance network info cache due to event network-changed-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.857 2 DEBUG oslo_concurrency.lockutils [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.858 2 DEBUG oslo_concurrency.lockutils [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:07 compute-0 nova_compute[194781]: 2025-10-02 19:42:07.858 2 DEBUG nova.network.neutron [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Refreshing network info cache for port 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:08 compute-0 nova_compute[194781]: 2025-10-02 19:42:08.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00079|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00080|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00081|binding|INFO|Releasing lport a8c22f72-a3ec-481d-9eec-24f5951376c0 from this chassis (sb_readonly=0)
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00082|binding|INFO|Releasing lport 74eec250-433e-49a6-99c0-57cb4cde4831 from this chassis (sb_readonly=0)
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.717 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "8c3516d0-e1db-4043-8054-0efaf55f8158" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.718 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.718 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.718 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.719 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.720 2 INFO nova.compute.manager [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Terminating instance
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.722 2 DEBUG nova.compute.manager [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:42:09 compute-0 podman[258482]: 2025-10-02 19:42:09.740543226 +0000 UTC m=+0.092927411 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:42:09 compute-0 podman[258481]: 2025-10-02 19:42:09.743530745 +0000 UTC m=+0.105726941 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container)
Oct 02 19:42:09 compute-0 kernel: tap5a48a8a2-e2 (unregistering): left promiscuous mode
Oct 02 19:42:09 compute-0 NetworkManager[52324]: <info>  [1759434129.7614] device (tap5a48a8a2-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:42:09 compute-0 podman[258483]: 2025-10-02 19:42:09.76591117 +0000 UTC m=+0.125779324 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00083|binding|INFO|Releasing lport 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d from this chassis (sb_readonly=0)
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00084|binding|INFO|Setting lport 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d down in Southbound
Oct 02 19:42:09 compute-0 ovn_controller[97052]: 2025-10-02T19:42:09Z|00085|binding|INFO|Removing iface tap5a48a8a2-e2 ovn-installed in OVS
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.784 2 DEBUG nova.compute.manager [req-8138667e-a75a-4255-a328-09a9158d116e req-37a68e32-c99d-4df4-accd-4e60749c628a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.784 2 DEBUG oslo_concurrency.lockutils [req-8138667e-a75a-4255-a328-09a9158d116e req-37a68e32-c99d-4df4-accd-4e60749c628a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.784 2 DEBUG oslo_concurrency.lockutils [req-8138667e-a75a-4255-a328-09a9158d116e req-37a68e32-c99d-4df4-accd-4e60749c628a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.785 2 DEBUG oslo_concurrency.lockutils [req-8138667e-a75a-4255-a328-09a9158d116e req-37a68e32-c99d-4df4-accd-4e60749c628a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.785 2 DEBUG nova.compute.manager [req-8138667e-a75a-4255-a328-09a9158d116e req-37a68e32-c99d-4df4-accd-4e60749c628a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.785 2 WARNING nova.compute.manager [req-8138667e-a75a-4255-a328-09a9158d116e req-37a68e32-c99d-4df4-accd-4e60749c628a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received unexpected event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with vm_state active and task_state None.
Oct 02 19:42:09 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:09.788 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:56:eb 10.100.0.7'], port_security=['fa:16:3e:ab:56:eb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8c3516d0-e1db-4043-8054-0efaf55f8158', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-297d3600-5c6c-4db6-8640-a20cc0215d99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31d647ffb4e42d1acec402a98b5d8c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be514375-c77b-41ee-bc81-d536a625090a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c41abe9-f69b-4cd8-8e79-3a3c37342998, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:09 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:09.790 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d in datapath 297d3600-5c6c-4db6-8640-a20cc0215d99 unbound from our chassis
Oct 02 19:42:09 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:09.791 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 297d3600-5c6c-4db6-8640-a20cc0215d99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:42:09 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:09.794 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[98634a61-02ac-4b02-a1ec-663daf6e3596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:09 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:09.795 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99 namespace which is not needed anymore
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:09 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 02 19:42:09 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000007.scope: Consumed 6.871s CPU time.
Oct 02 19:42:09 compute-0 systemd-machined[154795]: Machine qemu-6-instance-00000007 terminated.
Oct 02 19:42:09 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [NOTICE]   (258211) : haproxy version is 2.8.14-c23fe91
Oct 02 19:42:09 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [NOTICE]   (258211) : path to executable is /usr/sbin/haproxy
Oct 02 19:42:09 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [WARNING]  (258211) : Exiting Master process...
Oct 02 19:42:09 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [WARNING]  (258211) : Exiting Master process...
Oct 02 19:42:09 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [ALERT]    (258211) : Current worker (258213) exited with code 143 (Terminated)
Oct 02 19:42:09 compute-0 neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99[258207]: [WARNING]  (258211) : All workers exited. Exiting... (0)
Oct 02 19:42:09 compute-0 systemd[1]: libpod-2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0.scope: Deactivated successfully.
Oct 02 19:42:09 compute-0 podman[258561]: 2025-10-02 19:42:09.951977274 +0000 UTC m=+0.057187021 container died 2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.992 2 INFO nova.virt.libvirt.driver [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Instance destroyed successfully.
Oct 02 19:42:09 compute-0 nova_compute[194781]: 2025-10-02 19:42:09.993 2 DEBUG nova.objects.instance [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lazy-loading 'resources' on Instance uuid 8c3516d0-e1db-4043-8054-0efaf55f8158 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0-userdata-shm.mount: Deactivated successfully.
Oct 02 19:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa99f545af99ada6f797850e3ea262f89b1d45e0b20487124a2103bfb2ecd52-merged.mount: Deactivated successfully.
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.006 2 DEBUG nova.virt.libvirt.vif [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-81202730',display_name='tempest-ServersTestJSON-server-81202730',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-81202730',id=7,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOCQcro1hwcfjg7GB90X95/03ec50Xm2PEfPdqjeZYQYKY9bbVvij0sSoEius/UBfyPPI9I1ThZw1xzFqjYDKw5BN5UcEhWKWa0l3gBzTf1ncxRbtf7XpQ+EWfdiquJHpw==',key_name='tempest-keypair-1857723335',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a31d647ffb4e42d1acec402a98b5d8c9',ramdisk_id='',reservation_id='r-k2rl1dpw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1875586447',owner_user_name='tempest-ServersTestJSON-1875586447-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:42:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18a01f0516b04f26b8bbb33e72f1f51f',uuid=8c3516d0-e1db-4043-8054-0efaf55f8158,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.007 2 DEBUG nova.network.os_vif_util [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Converting VIF {"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.007 2 DEBUG nova.network.os_vif_util [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.008 2 DEBUG os_vif [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a48a8a2-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.016 2 INFO os_vif [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:56:eb,bridge_name='br-int',has_traffic_filtering=True,id=5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d,network=Network(297d3600-5c6c-4db6-8640-a20cc0215d99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a48a8a2-e2')
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.016 2 INFO nova.virt.libvirt.driver [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Deleting instance files /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158_del
Oct 02 19:42:10 compute-0 podman[258561]: 2025-10-02 19:42:10.017003082 +0000 UTC m=+0.122212829 container cleanup 2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.017 2 INFO nova.virt.libvirt.driver [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Deletion of /var/lib/nova/instances/8c3516d0-e1db-4043-8054-0efaf55f8158_del complete
Oct 02 19:42:10 compute-0 systemd[1]: libpod-conmon-2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0.scope: Deactivated successfully.
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.091 2 DEBUG nova.network.neutron [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Updated VIF entry in instance network info cache for port 5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.092 2 DEBUG nova.network.neutron [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Updating instance_info_cache with network_info: [{"id": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "address": "fa:16:3e:ab:56:eb", "network": {"id": "297d3600-5c6c-4db6-8640-a20cc0215d99", "bridge": "br-int", "label": "tempest-ServersTestJSON-1679256995-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31d647ffb4e42d1acec402a98b5d8c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a48a8a2-e2", "ovs_interfaceid": "5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.098 2 INFO nova.compute.manager [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Took 0.38 seconds to destroy the instance on the hypervisor.
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.098 2 DEBUG oslo.service.loopingcall [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.099 2 DEBUG nova.compute.manager [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.099 2 DEBUG nova.network.neutron [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:42:10 compute-0 podman[258607]: 2025-10-02 19:42:10.103584953 +0000 UTC m=+0.061575327 container remove 2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.109 2 DEBUG oslo_concurrency.lockutils [req-b698f8f4-0539-4e95-8517-f92380b4c6ee req-435cc91c-0752-41f8-879f-ca90c07381a4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-8c3516d0-e1db-4043-8054-0efaf55f8158" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.113 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[87889cc1-5dc3-4124-a8d9-6cddf251c31f]: (4, ('Thu Oct  2 07:42:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99 (2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0)\n2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0\nThu Oct  2 07:42:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99 (2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0)\n2f92129f74c3be744f2fef5b362f75e06d7779e1e38473c6feb423663e0d06d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.114 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[c20834d0-0931-4d8e-9233-d9909f53f916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.116 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297d3600-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:10 compute-0 kernel: tap297d3600-50: left promiscuous mode
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.136 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[85e43eb0-7975-4299-bd1d-b2d7fb897c69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.177 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[be4980a5-7a73-4f3a-abe3-89077b988d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.178 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a1feff66-625d-4286-bac6-a81d41f010a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.194 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[8d88f39d-646f-40fd-abcd-c6a3264bfccf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527820, 'reachable_time': 39350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258622, 'error': None, 'target': 'ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d297d3600\x2d5c6c\x2d4db6\x2d8640\x2da20cc0215d99.mount: Deactivated successfully.
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.205 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-297d3600-5c6c-4db6-8640-a20cc0215d99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:42:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:10.206 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[047a23e8-d12a-44b5-b1b0-7c31fd143a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:10 compute-0 nova_compute[194781]: 2025-10-02 19:42:10.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.710 2 DEBUG nova.network.neutron [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.740 2 INFO nova.compute.manager [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Took 1.64 seconds to deallocate network for instance.
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.791 2 DEBUG nova.compute.manager [req-c1e91b05-1094-46c2-8b7a-ed6f2dfda082 req-6882d5fc-6ab1-4e8f-a78a-019e7e120467 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Received event network-vif-deleted-5a48a8a2-e2b6-40ee-a894-eccb9d15ce8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.804 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.805 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.936 2 DEBUG nova.compute.provider_tree [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.952 2 DEBUG nova.scheduler.client.report [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:42:11 compute-0 nova_compute[194781]: 2025-10-02 19:42:11.984 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:12 compute-0 nova_compute[194781]: 2025-10-02 19:42:12.029 2 INFO nova.scheduler.client.report [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Deleted allocations for instance 8c3516d0-e1db-4043-8054-0efaf55f8158
Oct 02 19:42:12 compute-0 nova_compute[194781]: 2025-10-02 19:42:12.110 2 DEBUG oslo_concurrency.lockutils [None req-742078d8-bf10-4533-8324-cfe6ae2b17cd 18a01f0516b04f26b8bbb33e72f1f51f a31d647ffb4e42d1acec402a98b5d8c9 - - default default] Lock "8c3516d0-e1db-4043-8054-0efaf55f8158" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.737 2 DEBUG nova.compute.manager [req-3a02f4de-d995-4c14-935a-206d6e4cb39b req-aeb5f376-e66f-440e-9b6f-9f84e86eb9f6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.738 2 DEBUG oslo_concurrency.lockutils [req-3a02f4de-d995-4c14-935a-206d6e4cb39b req-aeb5f376-e66f-440e-9b6f-9f84e86eb9f6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.739 2 DEBUG oslo_concurrency.lockutils [req-3a02f4de-d995-4c14-935a-206d6e4cb39b req-aeb5f376-e66f-440e-9b6f-9f84e86eb9f6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.740 2 DEBUG oslo_concurrency.lockutils [req-3a02f4de-d995-4c14-935a-206d6e4cb39b req-aeb5f376-e66f-440e-9b6f-9f84e86eb9f6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.741 2 DEBUG nova.compute.manager [req-3a02f4de-d995-4c14-935a-206d6e4cb39b req-aeb5f376-e66f-440e-9b6f-9f84e86eb9f6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Processing event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.742 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.749 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.749 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434133.7495415, 802f6003-69b3-4337-9652-641263d5864f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.750 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] VM Resumed (Lifecycle Event)
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.757 2 INFO nova.virt.libvirt.driver [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] Instance spawned successfully.
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.757 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.783 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.794 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.799 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.800 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.800 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.801 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.802 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.803 2 DEBUG nova.virt.libvirt.driver [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.839 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.878 2 INFO nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Took 20.90 seconds to spawn the instance on the hypervisor.
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.879 2 DEBUG nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.901 2 DEBUG nova.compute.manager [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-changed-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.902 2 DEBUG nova.compute.manager [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Refreshing instance network info cache due to event network-changed-b27e7b6f-4ab7-48d9-a674-eb640289b746. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.903 2 DEBUG oslo_concurrency.lockutils [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.903 2 DEBUG oslo_concurrency.lockutils [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.904 2 DEBUG nova.network.neutron [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Refreshing network info cache for port b27e7b6f-4ab7-48d9-a674-eb640289b746 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.957 2 INFO nova.compute.manager [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Took 21.70 seconds to build instance.
Oct 02 19:42:13 compute-0 nova_compute[194781]: 2025-10-02 19:42:13.975 2 DEBUG oslo_concurrency.lockutils [None req-a6086775-90c9-4653-8a0c-5b2996e23dc0 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:14 compute-0 ovn_controller[97052]: 2025-10-02T19:42:14Z|00086|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:42:14 compute-0 ovn_controller[97052]: 2025-10-02T19:42:14Z|00087|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:42:14 compute-0 ovn_controller[97052]: 2025-10-02T19:42:14Z|00088|binding|INFO|Releasing lport a8c22f72-a3ec-481d-9eec-24f5951376c0 from this chassis (sb_readonly=0)
Oct 02 19:42:14 compute-0 nova_compute[194781]: 2025-10-02 19:42:14.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.921 2 DEBUG nova.compute.manager [req-7f6a60c6-6b5e-47d0-956f-c3f66a9d6adb req-ce123f42-a036-4c19-8043-f4b3bcb51ce5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.922 2 DEBUG oslo_concurrency.lockutils [req-7f6a60c6-6b5e-47d0-956f-c3f66a9d6adb req-ce123f42-a036-4c19-8043-f4b3bcb51ce5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.922 2 DEBUG oslo_concurrency.lockutils [req-7f6a60c6-6b5e-47d0-956f-c3f66a9d6adb req-ce123f42-a036-4c19-8043-f4b3bcb51ce5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.923 2 DEBUG oslo_concurrency.lockutils [req-7f6a60c6-6b5e-47d0-956f-c3f66a9d6adb req-ce123f42-a036-4c19-8043-f4b3bcb51ce5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.924 2 DEBUG nova.compute.manager [req-7f6a60c6-6b5e-47d0-956f-c3f66a9d6adb req-ce123f42-a036-4c19-8043-f4b3bcb51ce5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] No waiting events found dispatching network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.925 2 WARNING nova.compute.manager [req-7f6a60c6-6b5e-47d0-956f-c3f66a9d6adb req-ce123f42-a036-4c19-8043-f4b3bcb51ce5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received unexpected event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 for instance with vm_state active and task_state None.
Oct 02 19:42:15 compute-0 nova_compute[194781]: 2025-10-02 19:42:15.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.031 2 DEBUG nova.network.neutron [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updated VIF entry in instance network info cache for port b27e7b6f-4ab7-48d9-a674-eb640289b746. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.032 2 DEBUG nova.network.neutron [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.058 2 DEBUG oslo_concurrency.lockutils [req-2d934f67-9b69-45d4-981d-04c1b197dd15 req-c6bc0cb1-837c-446b-8c6c-b1ec50fb3625 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:16 compute-0 podman[258626]: 2025-10-02 19:42:16.738484614 +0000 UTC m=+0.114653937 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:42:16 compute-0 podman[258625]: 2025-10-02 19:42:16.790681871 +0000 UTC m=+0.147143231 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.864 2 DEBUG nova.compute.manager [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-changed-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.864 2 DEBUG nova.compute.manager [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Refreshing instance network info cache due to event network-changed-2758f8fe-aff6-42fb-9786-112689a5d452. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.864 2 DEBUG oslo_concurrency.lockutils [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.864 2 DEBUG oslo_concurrency.lockutils [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:16 compute-0 nova_compute[194781]: 2025-10-02 19:42:16.864 2 DEBUG nova.network.neutron [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Refreshing network info cache for port 2758f8fe-aff6-42fb-9786-112689a5d452 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.721 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.721 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.721 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.721 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.722 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.722 2 INFO nova.compute.manager [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Terminating instance
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.723 2 DEBUG nova.compute.manager [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:42:18 compute-0 kernel: tap2758f8fe-af (unregistering): left promiscuous mode
Oct 02 19:42:18 compute-0 NetworkManager[52324]: <info>  [1759434138.7545] device (tap2758f8fe-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:18 compute-0 ovn_controller[97052]: 2025-10-02T19:42:18Z|00089|binding|INFO|Releasing lport 2758f8fe-aff6-42fb-9786-112689a5d452 from this chassis (sb_readonly=0)
Oct 02 19:42:18 compute-0 ovn_controller[97052]: 2025-10-02T19:42:18Z|00090|binding|INFO|Setting lport 2758f8fe-aff6-42fb-9786-112689a5d452 down in Southbound
Oct 02 19:42:18 compute-0 ovn_controller[97052]: 2025-10-02T19:42:18Z|00091|binding|INFO|Removing iface tap2758f8fe-af ovn-installed in OVS
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:18 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:18.782 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:4f:b3 10.100.0.6'], port_security=['fa:16:3e:72:4f:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '802f6003-69b3-4337-9652-641263d5864f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a351c78-806e-4438-a270-95c4b5a89d4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f53c7449f8e46fb84491ca16ecef449', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dff9e43b-8314-4f25-a289-4dacbe747f4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e0e619b-7ec9-4af4-aa82-90a8356f1ae8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=2758f8fe-aff6-42fb-9786-112689a5d452) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:18 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:18.790 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 2758f8fe-aff6-42fb-9786-112689a5d452 in datapath 8a351c78-806e-4438-a270-95c4b5a89d4d unbound from our chassis
Oct 02 19:42:18 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:18.799 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a351c78-806e-4438-a270-95c4b5a89d4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:42:18 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:18.800 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e2383e1f-bb07-4999-8a83-98d9255a2e1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:18 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:18.804 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d namespace which is not needed anymore
Oct 02 19:42:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 02 19:42:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 6.331s CPU time.
Oct 02 19:42:18 compute-0 systemd-machined[154795]: Machine qemu-8-instance-00000008 terminated.
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:18 compute-0 nova_compute[194781]: 2025-10-02 19:42:18.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:18 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [NOTICE]   (258430) : haproxy version is 2.8.14-c23fe91
Oct 02 19:42:18 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [NOTICE]   (258430) : path to executable is /usr/sbin/haproxy
Oct 02 19:42:18 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [WARNING]  (258430) : Exiting Master process...
Oct 02 19:42:18 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [ALERT]    (258430) : Current worker (258432) exited with code 143 (Terminated)
Oct 02 19:42:18 compute-0 neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d[258426]: [WARNING]  (258430) : All workers exited. Exiting... (0)
Oct 02 19:42:18 compute-0 systemd[1]: libpod-7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81.scope: Deactivated successfully.
Oct 02 19:42:18 compute-0 conmon[258426]: conmon 7bc11e8d9956ea82869b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81.scope/container/memory.events
Oct 02 19:42:18 compute-0 podman[258689]: 2025-10-02 19:42:18.999611451 +0000 UTC m=+0.074685966 container died 7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.009 2 INFO nova.virt.libvirt.driver [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] Instance destroyed successfully.
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.009 2 DEBUG nova.objects.instance [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lazy-loading 'resources' on Instance uuid 802f6003-69b3-4337-9652-641263d5864f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.027 2 DEBUG nova.virt.libvirt.vif [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-690272599',display_name='tempest-ServersTestManualDisk-server-690272599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-690272599',id=8,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHqaX30hCJdLmbOfxgofGE14eqhapGuSxNbf8004P16cyFX+BDeR0BOc8E0L54R4mGxNJDr8fyZr+4oTbD/zyFtWB/zaHTHBsmExBW6jXPw9zFL+x3sOHyE0zXP3jIqk3Q==',key_name='tempest-keypair-460882991',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f53c7449f8e46fb84491ca16ecef449',ramdisk_id='',reservation_id='r-cg2fwbiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1647165318',owner_user_name='tempest-ServersTestManualDisk-1647165318-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:42:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e11fb23793a2452993b49534ed668211',uuid=802f6003-69b3-4337-9652-641263d5864f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.028 2 DEBUG nova.network.os_vif_util [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Converting VIF {"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.028 2 DEBUG nova.network.os_vif_util [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.030 2 DEBUG os_vif [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81-userdata-shm.mount: Deactivated successfully.
Oct 02 19:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ad5677af026dde445e001194fc8508e1661f6fbcc86f2e14b13eaee31164df-merged.mount: Deactivated successfully.
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.038 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2758f8fe-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:19 compute-0 podman[258689]: 2025-10-02 19:42:19.043967879 +0000 UTC m=+0.119042414 container cleanup 7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.047 2 INFO os_vif [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:4f:b3,bridge_name='br-int',has_traffic_filtering=True,id=2758f8fe-aff6-42fb-9786-112689a5d452,network=Network(8a351c78-806e-4438-a270-95c4b5a89d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2758f8fe-af')
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.048 2 INFO nova.virt.libvirt.driver [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Deleting instance files /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f_del
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.049 2 INFO nova.virt.libvirt.driver [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Deletion of /var/lib/nova/instances/802f6003-69b3-4337-9652-641263d5864f_del complete
Oct 02 19:42:19 compute-0 systemd[1]: libpod-conmon-7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81.scope: Deactivated successfully.
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.105 2 DEBUG nova.compute.manager [req-f65ba795-56d9-4a33-93b0-4ef57ce511af req-33d06d1e-50ed-4be6-9a54-9307a7a2220e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-vif-unplugged-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.105 2 DEBUG oslo_concurrency.lockutils [req-f65ba795-56d9-4a33-93b0-4ef57ce511af req-33d06d1e-50ed-4be6-9a54-9307a7a2220e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.105 2 DEBUG oslo_concurrency.lockutils [req-f65ba795-56d9-4a33-93b0-4ef57ce511af req-33d06d1e-50ed-4be6-9a54-9307a7a2220e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.106 2 DEBUG oslo_concurrency.lockutils [req-f65ba795-56d9-4a33-93b0-4ef57ce511af req-33d06d1e-50ed-4be6-9a54-9307a7a2220e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.106 2 DEBUG nova.compute.manager [req-f65ba795-56d9-4a33-93b0-4ef57ce511af req-33d06d1e-50ed-4be6-9a54-9307a7a2220e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] No waiting events found dispatching network-vif-unplugged-2758f8fe-aff6-42fb-9786-112689a5d452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.106 2 DEBUG nova.compute.manager [req-f65ba795-56d9-4a33-93b0-4ef57ce511af req-33d06d1e-50ed-4be6-9a54-9307a7a2220e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-vif-unplugged-2758f8fe-aff6-42fb-9786-112689a5d452 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.107 2 INFO nova.compute.manager [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Took 0.38 seconds to destroy the instance on the hypervisor.
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.107 2 DEBUG oslo.service.loopingcall [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.108 2 DEBUG nova.compute.manager [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.108 2 DEBUG nova.network.neutron [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:42:19 compute-0 podman[258733]: 2025-10-02 19:42:19.133583381 +0000 UTC m=+0.061334401 container remove 7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.142 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[699d5856-11d0-47ff-b377-a98f9b70cc59]: (4, ('Thu Oct  2 07:42:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d (7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81)\n7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81\nThu Oct  2 07:42:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d (7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81)\n7bc11e8d9956ea82869b86a7b10c8db29f414d8eb3dae5522d0696f61ac72f81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.145 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4da703-04fe-4bf9-8697-9e0331028767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.146 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a351c78-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:19 compute-0 kernel: tap8a351c78-80: left promiscuous mode
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.157 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[607eaeba-1b5f-46c5-9e14-a4b48e65262d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.195 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cbe2c8-8a1d-4203-a84f-94cfbad8aaa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.197 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[74402ed9-53e8-40cc-b8fd-c873ece77f87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.217 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[103c726d-92fb-48fc-87b0-529b34fe155b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528267, 'reachable_time': 24498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258745, 'error': None, 'target': 'ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d8a351c78\x2d806e\x2d4438\x2da270\x2d95c4b5a89d4d.mount: Deactivated successfully.
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.220 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a351c78-806e-4438-a270-95c4b5a89d4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:42:19 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:19.220 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[40ef4c1c-8e00-4a96-ae87-244d5a442d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.233 2 DEBUG nova.network.neutron [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Updated VIF entry in instance network info cache for port 2758f8fe-aff6-42fb-9786-112689a5d452. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.234 2 DEBUG nova.network.neutron [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Updating instance_info_cache with network_info: [{"id": "2758f8fe-aff6-42fb-9786-112689a5d452", "address": "fa:16:3e:72:4f:b3", "network": {"id": "8a351c78-806e-4438-a270-95c4b5a89d4d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1274704635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f53c7449f8e46fb84491ca16ecef449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2758f8fe-af", "ovs_interfaceid": "2758f8fe-aff6-42fb-9786-112689a5d452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:19 compute-0 nova_compute[194781]: 2025-10-02 19:42:19.265 2 DEBUG oslo_concurrency.lockutils [req-06e8c79e-bc51-4e30-9cff-673554e69c26 req-262f1725-2522-4b51-b298-f9c138f0e634 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-802f6003-69b3-4337-9652-641263d5864f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:20 compute-0 podman[258747]: 2025-10-02 19:42:20.694268883 +0000 UTC m=+0.069894498 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:42:20 compute-0 podman[258748]: 2025-10-02 19:42:20.770036156 +0000 UTC m=+0.136941410 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 19:42:20 compute-0 nova_compute[194781]: 2025-10-02 19:42:20.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:21 compute-0 ovn_controller[97052]: 2025-10-02T19:42:21Z|00092|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:42:21 compute-0 ovn_controller[97052]: 2025-10-02T19:42:21Z|00093|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.408 2 DEBUG nova.compute.manager [req-b15b795b-ffcf-4666-a0c7-a41d5dabaac8 req-065ac765-ea4b-4f52-946d-a97d15b89171 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.410 2 DEBUG oslo_concurrency.lockutils [req-b15b795b-ffcf-4666-a0c7-a41d5dabaac8 req-065ac765-ea4b-4f52-946d-a97d15b89171 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "802f6003-69b3-4337-9652-641263d5864f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.411 2 DEBUG oslo_concurrency.lockutils [req-b15b795b-ffcf-4666-a0c7-a41d5dabaac8 req-065ac765-ea4b-4f52-946d-a97d15b89171 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.412 2 DEBUG oslo_concurrency.lockutils [req-b15b795b-ffcf-4666-a0c7-a41d5dabaac8 req-065ac765-ea4b-4f52-946d-a97d15b89171 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.412 2 DEBUG nova.compute.manager [req-b15b795b-ffcf-4666-a0c7-a41d5dabaac8 req-065ac765-ea4b-4f52-946d-a97d15b89171 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] No waiting events found dispatching network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.413 2 WARNING nova.compute.manager [req-b15b795b-ffcf-4666-a0c7-a41d5dabaac8 req-065ac765-ea4b-4f52-946d-a97d15b89171 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received unexpected event network-vif-plugged-2758f8fe-aff6-42fb-9786-112689a5d452 for instance with vm_state active and task_state deleting.
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.660 2 DEBUG nova.network.neutron [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.677 2 INFO nova.compute.manager [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] Took 2.57 seconds to deallocate network for instance.
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.745 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.746 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.863 2 DEBUG nova.compute.provider_tree [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.884 2 DEBUG nova.scheduler.client.report [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.911 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:21 compute-0 nova_compute[194781]: 2025-10-02 19:42:21.937 2 INFO nova.scheduler.client.report [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Deleted allocations for instance 802f6003-69b3-4337-9652-641263d5864f
Oct 02 19:42:22 compute-0 nova_compute[194781]: 2025-10-02 19:42:22.032 2 DEBUG oslo_concurrency.lockutils [None req-1b1b28b5-1e9e-4fde-8299-ba6413e03625 e11fb23793a2452993b49534ed668211 2f53c7449f8e46fb84491ca16ecef449 - - default default] Lock "802f6003-69b3-4337-9652-641263d5864f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:23 compute-0 nova_compute[194781]: 2025-10-02 19:42:23.701 2 DEBUG nova.compute.manager [req-07b63a49-9127-4290-9d6b-8d1030dbbf31 req-fdd63a92-de82-4746-aa19-a7c7a586bb4a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 802f6003-69b3-4337-9652-641263d5864f] Received event network-vif-deleted-2758f8fe-aff6-42fb-9786-112689a5d452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:23 compute-0 nova_compute[194781]: 2025-10-02 19:42:23.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:23 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:23.972 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:23 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:23.973 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:42:24 compute-0 nova_compute[194781]: 2025-10-02 19:42:24.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:24 compute-0 nova_compute[194781]: 2025-10-02 19:42:24.986 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434129.984693, 8c3516d0-e1db-4043-8054-0efaf55f8158 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:24 compute-0 nova_compute[194781]: 2025-10-02 19:42:24.987 2 INFO nova.compute.manager [-] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] VM Stopped (Lifecycle Event)
Oct 02 19:42:25 compute-0 nova_compute[194781]: 2025-10-02 19:42:25.012 2 DEBUG nova.compute.manager [None req-7218563d-f447-4cf6-9bc7-2dadc14d9842 - - - - - -] [instance: 8c3516d0-e1db-4043-8054-0efaf55f8158] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:25 compute-0 nova_compute[194781]: 2025-10-02 19:42:25.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:27 compute-0 podman[258787]: 2025-10-02 19:42:27.739892869 +0000 UTC m=+0.099586757 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:42:28 compute-0 nova_compute[194781]: 2025-10-02 19:42:28.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:28 compute-0 nova_compute[194781]: 2025-10-02 19:42:28.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:29 compute-0 nova_compute[194781]: 2025-10-02 19:42:29.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:29 compute-0 nova_compute[194781]: 2025-10-02 19:42:29.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:29 compute-0 podman[209015]: time="2025-10-02T19:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:42:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:42:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5691 "" "Go-http-client/1.1"
Oct 02 19:42:30 compute-0 nova_compute[194781]: 2025-10-02 19:42:30.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:31 compute-0 openstack_network_exporter[211160]: ERROR   19:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:31 compute-0 openstack_network_exporter[211160]: ERROR   19:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:42:31 compute-0 openstack_network_exporter[211160]: ERROR   19:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:42:31 compute-0 openstack_network_exporter[211160]: ERROR   19:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:42:31 compute-0 openstack_network_exporter[211160]: ERROR   19:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:42:32 compute-0 nova_compute[194781]: 2025-10-02 19:42:32.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:32 compute-0 nova_compute[194781]: 2025-10-02 19:42:32.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:42:33 compute-0 nova_compute[194781]: 2025-10-02 19:42:33.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:33 compute-0 ovn_controller[97052]: 2025-10-02T19:42:33Z|00094|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:42:33 compute-0 ovn_controller[97052]: 2025-10-02T19:42:33Z|00095|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:42:33 compute-0 nova_compute[194781]: 2025-10-02 19:42:33.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:33 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:33.974 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.005 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434139.0036163, 802f6003-69b3-4337-9652-641263d5864f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.005 2 INFO nova.compute.manager [-] [instance: 802f6003-69b3-4337-9652-641263d5864f] VM Stopped (Lifecycle Event)
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.044 2 DEBUG nova.compute.manager [None req-1e7cbe7d-5db5-4172-b306-7f2c489fc337 - - - - - -] [instance: 802f6003-69b3-4337-9652-641263d5864f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:34 compute-0 nova_compute[194781]: 2025-10-02 19:42:34.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:35 compute-0 unix_chkpwd[258813]: password check failed for user (root)
Oct 02 19:42:35 compute-0 sshd-session[258811]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20  user=root
Oct 02 19:42:35 compute-0 nova_compute[194781]: 2025-10-02 19:42:35.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:36 compute-0 nova_compute[194781]: 2025-10-02 19:42:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:36 compute-0 nova_compute[194781]: 2025-10-02 19:42:36.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:36 compute-0 podman[258815]: 2025-10-02 19:42:36.75160203 +0000 UTC m=+0.106654565 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct 02 19:42:36 compute-0 podman[258814]: 2025-10-02 19:42:36.76515319 +0000 UTC m=+0.130598652 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.063 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.064 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.206 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.273 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.274 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.333 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.346 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.444 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.446 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.543 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.546 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.627 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.632 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:37 compute-0 sshd-session[258811]: Failed password for root from 193.46.255.20 port 15986 ssh2
Oct 02 19:42:37 compute-0 nova_compute[194781]: 2025-10-02 19:42:37.708 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.196 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.198 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4924MB free_disk=72.44460678100586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.199 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.200 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.352 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.353 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 6eada58a-d077-43e5-ab40-dd45abbe38f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.354 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.355 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.442 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.469 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:42:38 compute-0 unix_chkpwd[258872]: password check failed for user (root)
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.503 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:42:38 compute-0 nova_compute[194781]: 2025-10-02 19:42:38.504 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:39 compute-0 nova_compute[194781]: 2025-10-02 19:42:39.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:39 compute-0 nova_compute[194781]: 2025-10-02 19:42:39.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.501 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:40 compute-0 sshd-session[258811]: Failed password for root from 193.46.255.20 port 15986 ssh2
Oct 02 19:42:40 compute-0 podman[258884]: 2025-10-02 19:42:40.768028539 +0000 UTC m=+0.115910741 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 02 19:42:40 compute-0 podman[258882]: 2025-10-02 19:42:40.772025166 +0000 UTC m=+0.123658848 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 19:42:40 compute-0 podman[258883]: 2025-10-02 19:42:40.793923578 +0000 UTC m=+0.138994835 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, io.buildah.version=1.29.0, container_name=kepler, architecture=x86_64, version=9.4)
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.950 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.951 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.959 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "3aad9658-5f65-4eed-8b09-f453505c2d61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.960 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.967 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:42:40 compute-0 nova_compute[194781]: 2025-10-02 19:42:40.982 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.039 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.039 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.049 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.049 2 INFO nova.compute.claims [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.075 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.227 2 DEBUG nova.compute.provider_tree [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.249 2 DEBUG nova.scheduler.client.report [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.270 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.271 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.274 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.283 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.284 2 INFO nova.compute.claims [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.338 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.340 2 DEBUG nova.network.neutron [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.364 2 INFO nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.403 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.480 2 DEBUG nova.compute.provider_tree [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.490 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.493 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.494 2 INFO nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Creating image(s)
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.496 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "/var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.497 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "/var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.498 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "/var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.511 2 DEBUG nova.scheduler.client.report [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.515 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.542 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.544 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.597 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.600 2 DEBUG nova.network.neutron [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.615 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.616 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.616 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.629 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 unix_chkpwd[258940]: password check failed for user (root)
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.650 2 INFO nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.674 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.700 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.701 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.747 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.748 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.749 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.772 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.774 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.775 2 INFO nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Creating image(s)
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.775 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "/var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.776 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "/var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.776 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "/var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.788 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.830 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.831 2 DEBUG nova.virt.disk.api [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Checking if we can resize image /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.832 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 ovn_controller[97052]: 2025-10-02T19:42:41Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:84:0f 10.100.0.3
Oct 02 19:42:41 compute-0 ovn_controller[97052]: 2025-10-02T19:42:41Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:84:0f 10.100.0.3
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.876 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.877 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.879 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.895 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.931 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.933 2 DEBUG nova.virt.disk.api [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Cannot resize image /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.933 2 DEBUG nova.objects.instance [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lazy-loading 'migration_context' on Instance uuid fd018206-5b5d-4759-8481-a7dd68c01a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.951 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.951 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Ensure instance console log exists: /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.952 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.953 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.953 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.956 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.957 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:41 compute-0 nova_compute[194781]: 2025-10-02 19:42:41.986 2 DEBUG nova.policy [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5d286f2c6fa49b2bded7a673c5a9d52', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfa01cf9d3eb4388bef0e350af472762', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.011 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.011 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.012 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.077 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.079 2 DEBUG nova.virt.disk.api [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Checking if we can resize image /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.080 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.144 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.146 2 DEBUG nova.virt.disk.api [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Cannot resize image /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.147 2 DEBUG nova.objects.instance [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lazy-loading 'migration_context' on Instance uuid 3aad9658-5f65-4eed-8b09-f453505c2d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.164 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.165 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Ensure instance console log exists: /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.166 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.167 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.167 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:42 compute-0 nova_compute[194781]: 2025-10-02 19:42:42.236 2 DEBUG nova.policy [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eefe39d7484540c99c7e4ac98c03cf24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a776ef3132894c27a8bfaa390763de2a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:42:43 compute-0 nova_compute[194781]: 2025-10-02 19:42:43.028 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:43 compute-0 nova_compute[194781]: 2025-10-02 19:42:43.374 2 DEBUG nova.network.neutron [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Successfully created port: 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:42:43 compute-0 nova_compute[194781]: 2025-10-02 19:42:43.408 2 DEBUG nova.network.neutron [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Successfully created port: e5040e37-a376-40c4-b891-5e45c03cb9d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:42:43 compute-0 sshd-session[258811]: Failed password for root from 193.46.255.20 port 15986 ssh2
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.514 2 DEBUG nova.network.neutron [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Successfully updated port: e5040e37-a376-40c4-b891-5e45c03cb9d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.540 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "refresh_cache-3aad9658-5f65-4eed-8b09-f453505c2d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.541 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquired lock "refresh_cache-3aad9658-5f65-4eed-8b09-f453505c2d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.541 2 DEBUG nova.network.neutron [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.638 2 DEBUG nova.network.neutron [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Successfully updated port: 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.658 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.658 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.658 2 DEBUG nova.network.neutron [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:42:44 compute-0 sshd-session[258811]: Received disconnect from 193.46.255.20 port 15986:11:  [preauth]
Oct 02 19:42:44 compute-0 sshd-session[258811]: Disconnected from authenticating user root 193.46.255.20 port 15986 [preauth]
Oct 02 19:42:44 compute-0 sshd-session[258811]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20  user=root
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.821 2 DEBUG nova.compute.manager [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Received event network-changed-e5040e37-a376-40c4-b891-5e45c03cb9d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.821 2 DEBUG nova.compute.manager [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Refreshing instance network info cache due to event network-changed-e5040e37-a376-40c4-b891-5e45c03cb9d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:44 compute-0 nova_compute[194781]: 2025-10-02 19:42:44.822 2 DEBUG oslo_concurrency.lockutils [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-3aad9658-5f65-4eed-8b09-f453505c2d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.056 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.057 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.064 2 DEBUG nova.compute.manager [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.064 2 DEBUG nova.compute.manager [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing instance network info cache due to event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.065 2 DEBUG oslo_concurrency.lockutils [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.182 2 DEBUG nova.network.neutron [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.359 2 DEBUG nova.network.neutron [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.445 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.445 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.446 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.446 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:45 compute-0 unix_chkpwd[258970]: password check failed for user (root)
Oct 02 19:42:45 compute-0 sshd-session[258968]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20  user=root
Oct 02 19:42:45 compute-0 nova_compute[194781]: 2025-10-02 19:42:45.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:46 compute-0 nova_compute[194781]: 2025-10-02 19:42:46.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:47 compute-0 sshd-session[258968]: Failed password for root from 193.46.255.20 port 59018 ssh2
Oct 02 19:42:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:47.488 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:47.489 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:47.490 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:47 compute-0 podman[258971]: 2025-10-02 19:42:47.712665051 +0000 UTC m=+0.083397457 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:42:47 compute-0 podman[258972]: 2025-10-02 19:42:47.73483251 +0000 UTC m=+0.100675907 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.114 2 DEBUG nova.network.neutron [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.137 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.138 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Instance network_info: |[{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.138 2 DEBUG oslo_concurrency.lockutils [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.139 2 DEBUG nova.network.neutron [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.142 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Start _get_guest_xml network_info=[{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.150 2 WARNING nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.157 2 DEBUG nova.virt.libvirt.host [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.158 2 DEBUG nova.virt.libvirt.host [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.164 2 DEBUG nova.virt.libvirt.host [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.165 2 DEBUG nova.virt.libvirt.host [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.165 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.166 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.166 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.167 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.167 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.167 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.168 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.168 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.168 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.169 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.169 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.169 2 DEBUG nova.virt.hardware [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.173 2 DEBUG nova.virt.libvirt.vif [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1258340398',display_name='tempest-AttachInterfacesUnderV243Test-server-1258340398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1258340398',id=10,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAi8r74NAihki+Wu7/WVf2EpMRRpAad1pvOJ9n7X7dtUA3wA81PPkz4CDNLV0PKBV+vfeT6ZKEwNa2p45q2P6JovkirP8zmol2nXt3bF1GLnxW946byUaEp1P161J+2sXQ==',key_name='tempest-keypair-1402289596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfa01cf9d3eb4388bef0e350af472762',ramdisk_id='',reservation_id='r-8g54kq9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1074896381',owner_user_name='tempest-AttachInterfacesUnderV243Test-1074896381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:42:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5d286f2c6fa49b2bded7a673c5a9d52',uuid=fd018206-5b5d-4759-8481-a7dd68c01a2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.173 2 DEBUG nova.network.os_vif_util [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Converting VIF {"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.174 2 DEBUG nova.network.os_vif_util [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.175 2 DEBUG nova.objects.instance [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd018206-5b5d-4759-8481-a7dd68c01a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.189 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <uuid>fd018206-5b5d-4759-8481-a7dd68c01a2e</uuid>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <name>instance-0000000a</name>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1258340398</nova:name>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:42:48</nova:creationTime>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:user uuid="c5d286f2c6fa49b2bded7a673c5a9d52">tempest-AttachInterfacesUnderV243Test-1074896381-project-member</nova:user>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:project uuid="bfa01cf9d3eb4388bef0e350af472762">tempest-AttachInterfacesUnderV243Test-1074896381</nova:project>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:port uuid="93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66">
Oct 02 19:42:48 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <system>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="serial">fd018206-5b5d-4759-8481-a7dd68c01a2e</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="uuid">fd018206-5b5d-4759-8481-a7dd68c01a2e</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </system>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <os>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </os>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <features>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </features>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.config"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:81:87:ef"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <target dev="tap93a8e2fd-ae"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/console.log" append="off"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <video>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </video>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:42:48 compute-0 nova_compute[194781]: </domain>
Oct 02 19:42:48 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.190 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Preparing to wait for external event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.191 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.191 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.191 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.192 2 DEBUG nova.virt.libvirt.vif [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1258340398',display_name='tempest-AttachInterfacesUnderV243Test-server-1258340398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1258340398',id=10,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAi8r74NAihki+Wu7/WVf2EpMRRpAad1pvOJ9n7X7dtUA3wA81PPkz4CDNLV0PKBV+vfeT6ZKEwNa2p45q2P6JovkirP8zmol2nXt3bF1GLnxW946byUaEp1P161J+2sXQ==',key_name='tempest-keypair-1402289596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfa01cf9d3eb4388bef0e350af472762',ramdisk_id='',reservation_id='r-8g54kq9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1074896381',owner_user_name='tempest-AttachInterfacesUnderV243Test-1074896381-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:42:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5d286f2c6fa49b2bded7a673c5a9d52',uuid=fd018206-5b5d-4759-8481-a7dd68c01a2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.193 2 DEBUG nova.network.os_vif_util [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Converting VIF {"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.194 2 DEBUG nova.network.os_vif_util [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.194 2 DEBUG os_vif [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93a8e2fd-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93a8e2fd-ae, col_values=(('external_ids', {'iface-id': '93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:87:ef', 'vm-uuid': 'fd018206-5b5d-4759-8481-a7dd68c01a2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 NetworkManager[52324]: <info>  [1759434168.2072] manager: (tap93a8e2fd-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.219 2 INFO os_vif [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae')
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.275 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.276 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.276 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] No VIF found with MAC fa:16:3e:81:87:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.277 2 INFO nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Using config drive
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.372 2 DEBUG nova.network.neutron [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Updating instance_info_cache with network_info: [{"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.408 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Releasing lock "refresh_cache-3aad9658-5f65-4eed-8b09-f453505c2d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.409 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Instance network_info: |[{"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.410 2 DEBUG oslo_concurrency.lockutils [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-3aad9658-5f65-4eed-8b09-f453505c2d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.411 2 DEBUG nova.network.neutron [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Refreshing network info cache for port e5040e37-a376-40c4-b891-5e45c03cb9d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.417 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Start _get_guest_xml network_info=[{"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.428 2 WARNING nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.438 2 DEBUG nova.virt.libvirt.host [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.439 2 DEBUG nova.virt.libvirt.host [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.455 2 DEBUG nova.virt.libvirt.host [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.457 2 DEBUG nova.virt.libvirt.host [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.457 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.458 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.459 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.460 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.461 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.461 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.462 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.463 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.463 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.464 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.465 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.465 2 DEBUG nova.virt.hardware [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.471 2 DEBUG nova.virt.libvirt.vif [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1810216507',display_name='tempest-ServerAddressesTestJSON-server-1810216507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1810216507',id=9,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a776ef3132894c27a8bfaa390763de2a',ramdisk_id='',reservation_id='r-yqbfw7nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1342472581',owner_user_name='tempest-ServerAddressesTestJSON-1342472581-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:42:41Z,user_data=None,user_id='eefe39d7484540c99c7e4ac98c03cf24',uuid=3aad9658-5f65-4eed-8b09-f453505c2d61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.472 2 DEBUG nova.network.os_vif_util [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Converting VIF {"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.473 2 DEBUG nova.network.os_vif_util [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.474 2 DEBUG nova.objects.instance [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lazy-loading 'pci_devices' on Instance uuid 3aad9658-5f65-4eed-8b09-f453505c2d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.497 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <uuid>3aad9658-5f65-4eed-8b09-f453505c2d61</uuid>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <name>instance-00000009</name>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:name>tempest-ServerAddressesTestJSON-server-1810216507</nova:name>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:42:48</nova:creationTime>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:user uuid="eefe39d7484540c99c7e4ac98c03cf24">tempest-ServerAddressesTestJSON-1342472581-project-member</nova:user>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:project uuid="a776ef3132894c27a8bfaa390763de2a">tempest-ServerAddressesTestJSON-1342472581</nova:project>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         <nova:port uuid="e5040e37-a376-40c4-b891-5e45c03cb9d4">
Oct 02 19:42:48 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <system>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="serial">3aad9658-5f65-4eed-8b09-f453505c2d61</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="uuid">3aad9658-5f65-4eed-8b09-f453505c2d61</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </system>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <os>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </os>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <features>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </features>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.config"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:b8:b9:c8"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <target dev="tape5040e37-a3"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/console.log" append="off"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <video>
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </video>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:42:48 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:42:48 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:42:48 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:42:48 compute-0 nova_compute[194781]: </domain>
Oct 02 19:42:48 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.499 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Preparing to wait for external event network-vif-plugged-e5040e37-a376-40c4-b891-5e45c03cb9d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.500 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.500 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.501 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.502 2 DEBUG nova.virt.libvirt.vif [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1810216507',display_name='tempest-ServerAddressesTestJSON-server-1810216507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1810216507',id=9,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a776ef3132894c27a8bfaa390763de2a',ramdisk_id='',reservation_id='r-yqbfw7nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1342472581',owner_user_name='tempest-ServerAddressesTestJSON-1342472581-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:42:41Z,user_data=None,user_id='eefe39d7484540c99c7e4ac98c03cf24',uuid=3aad9658-5f65-4eed-8b09-f453505c2d61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.503 2 DEBUG nova.network.os_vif_util [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Converting VIF {"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.505 2 DEBUG nova.network.os_vif_util [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.505 2 DEBUG os_vif [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.508 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5040e37-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5040e37-a3, col_values=(('external_ids', {'iface-id': 'e5040e37-a376-40c4-b891-5e45c03cb9d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:b9:c8', 'vm-uuid': '3aad9658-5f65-4eed-8b09-f453505c2d61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 NetworkManager[52324]: <info>  [1759434168.5183] manager: (tape5040e37-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.536 2 INFO os_vif [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3')
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.595 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.596 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.596 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] No VIF found with MAC fa:16:3e:b8:b9:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.596 2 INFO nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Using config drive
Oct 02 19:42:48 compute-0 unix_chkpwd[259015]: password check failed for user (root)
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.929 2 INFO nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Creating config drive at /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.config
Oct 02 19:42:48 compute-0 nova_compute[194781]: 2025-10-02 19:42:48.937 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpup57o7p9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.063 2 DEBUG oslo_concurrency.processutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpup57o7p9" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:49 compute-0 kernel: tap93a8e2fd-ae: entered promiscuous mode
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.1655] manager: (tap93a8e2fd-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00096|binding|INFO|Claiming lport 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 for this chassis.
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00097|binding|INFO|93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66: Claiming fa:16:3e:81:87:ef 10.100.0.12
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.182 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:87:ef 10.100.0.12'], port_security=['fa:16:3e:81:87:ef 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fd018206-5b5d-4759-8481-a7dd68c01a2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfa01cf9d3eb4388bef0e350af472762', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a5e2c76-a0b6-479b-a7e1-ac8a5e2ef609', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac8c83c5-af49-454a-8773-e23c66675f28, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.184 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 in datapath c07a9d85-90af-47c3-a2ed-3103aaadb7da bound to our chassis
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.187 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c07a9d85-90af-47c3-a2ed-3103aaadb7da
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.199 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[35796043-61c9-4767-ab38-576e8270d713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.200 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc07a9d85-91 in ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.203 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc07a9d85-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.204 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7c0da3-a256-4ac7-8690-9ac5dfccf5fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00098|binding|INFO|Setting lport 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 ovn-installed in OVS
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00099|binding|INFO|Setting lport 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 up in Southbound
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.206 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[986d5764-6b50-4ef1-ae75-e5a37f9650ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 systemd-udevd[259036]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.227 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4678a8-b9ed-427e-aa2b-486e86eaa466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.2344] device (tap93a8e2fd-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.2350] device (tap93a8e2fd-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:42:49 compute-0 systemd-machined[154795]: New machine qemu-9-instance-0000000a.
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.266 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2c078a-7ee4-436d-91ac-fd72ee52f96f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000a.
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.311 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[03ea9522-9f6a-4fe7-ac75-58e10027bb13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.314 2 INFO nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Creating config drive at /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.config
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.323 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0edc37f4-5080-4fd1-85db-741535a6df06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.3260] manager: (tapc07a9d85-90): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.324 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpshy3o0xs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.369 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[27198ef3-523e-4d8c-95ec-9835e37ac40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.373 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[63e9cc0e-be4b-4390-b4e8-4c31c855a9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.378 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.398 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.399 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.4134] device (tapc07a9d85-90): carrier: link connected
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.423 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[37a3fbd8-c8e9-4291-aec9-f32b827b8c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.446 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[62aaca1f-1899-4bdd-aedd-d2fd63c4cd91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc07a9d85-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:86:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533003, 'reachable_time': 41237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259075, 'error': None, 'target': 'ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.466 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[90d69128-b748-48f2-b174-649b87cbcb53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:8600'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533003, 'tstamp': 533003}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259076, 'error': None, 'target': 'ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.468 2 DEBUG oslo_concurrency.processutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpshy3o0xs" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.484 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ec852a-dbc4-4a5d-90d3-32e68b7c50b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc07a9d85-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:86:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533003, 'reachable_time': 41237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259077, 'error': None, 'target': 'ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.516 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e372a629-af42-441d-9e7b-378f976243d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 kernel: tape5040e37-a3: entered promiscuous mode
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.5363] manager: (tape5040e37-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00100|binding|INFO|Claiming lport e5040e37-a376-40c4-b891-5e45c03cb9d4 for this chassis.
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00101|binding|INFO|e5040e37-a376-40c4-b891-5e45c03cb9d4: Claiming fa:16:3e:b8:b9:c8 10.100.0.5
Oct 02 19:42:49 compute-0 systemd-udevd[259061]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.551 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b9:c8 10.100.0.5'], port_security=['fa:16:3e:b8:b9:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3aad9658-5f65-4eed-8b09-f453505c2d61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a776ef3132894c27a8bfaa390763de2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd772d5fb-ebd6-4044-8e54-4e43ef5af6f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43cafc79-e8c0-4c28-812d-ca33028d228b, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=e5040e37-a376-40c4-b891-5e45c03cb9d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.5552] device (tape5040e37-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.5559] device (tape5040e37-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00102|binding|INFO|Setting lport e5040e37-a376-40c4-b891-5e45c03cb9d4 ovn-installed in OVS
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00103|binding|INFO|Setting lport e5040e37-a376-40c4-b891-5e45c03cb9d4 up in Southbound
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 systemd-machined[154795]: New machine qemu-10-instance-00000009.
Oct 02 19:42:49 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000009.
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.602 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[77b84a1e-b1cf-49ca-b440-eecaad9a0f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.604 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc07a9d85-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.604 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.605 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc07a9d85-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:49 compute-0 NetworkManager[52324]: <info>  [1759434169.6081] manager: (tapc07a9d85-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 02 19:42:49 compute-0 kernel: tapc07a9d85-90: entered promiscuous mode
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.611 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc07a9d85-90, col_values=(('external_ids', {'iface-id': '5a048b67-2936-4fb1-8322-b03194cd7ecb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:49 compute-0 ovn_controller[97052]: 2025-10-02T19:42:49Z|00104|binding|INFO|Releasing lport 5a048b67-2936-4fb1-8322-b03194cd7ecb from this chassis (sb_readonly=0)
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.630 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c07a9d85-90af-47c3-a2ed-3103aaadb7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c07a9d85-90af-47c3-a2ed-3103aaadb7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.631 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[693c6796-7a4b-4e5b-a39a-caef09f9c5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.632 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-c07a9d85-90af-47c3-a2ed-3103aaadb7da
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/c07a9d85-90af-47c3-a2ed-3103aaadb7da.pid.haproxy
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID c07a9d85-90af-47c3-a2ed-3103aaadb7da
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:42:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:49.635 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'env', 'PROCESS_TAG=haproxy-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c07a9d85-90af-47c3-a2ed-3103aaadb7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.693 2 DEBUG nova.compute.manager [req-b8e31ce4-97a5-4411-8bd1-3cb500993515 req-2a4f8c40-f0bb-4fa4-b759-92ede7485c13 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.694 2 DEBUG oslo_concurrency.lockutils [req-b8e31ce4-97a5-4411-8bd1-3cb500993515 req-2a4f8c40-f0bb-4fa4-b759-92ede7485c13 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.694 2 DEBUG oslo_concurrency.lockutils [req-b8e31ce4-97a5-4411-8bd1-3cb500993515 req-2a4f8c40-f0bb-4fa4-b759-92ede7485c13 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.695 2 DEBUG oslo_concurrency.lockutils [req-b8e31ce4-97a5-4411-8bd1-3cb500993515 req-2a4f8c40-f0bb-4fa4-b759-92ede7485c13 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:49 compute-0 nova_compute[194781]: 2025-10-02 19:42:49.695 2 DEBUG nova.compute.manager [req-b8e31ce4-97a5-4411-8bd1-3cb500993515 req-2a4f8c40-f0bb-4fa4-b759-92ede7485c13 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Processing event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:42:50 compute-0 podman[259136]: 2025-10-02 19:42:50.139037647 +0000 UTC m=+0.083220962 container create 1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:42:50 compute-0 systemd[1]: Started libpod-conmon-1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3.scope.
Oct 02 19:42:50 compute-0 podman[259136]: 2025-10-02 19:42:50.101656974 +0000 UTC m=+0.045840329 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:42:50 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec552415914de20eaac25976e537c7d3952ee93f9be482a053eb4906678aa5c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:50 compute-0 podman[259136]: 2025-10-02 19:42:50.266611018 +0000 UTC m=+0.210794363 container init 1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 19:42:50 compute-0 podman[259136]: 2025-10-02 19:42:50.275131854 +0000 UTC m=+0.219315169 container start 1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:42:50 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [NOTICE]   (259154) : New worker (259156) forked
Oct 02 19:42:50 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [NOTICE]   (259154) : Loading success.
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.355 105943 INFO neutron.agent.ovn.metadata.agent [-] Port e5040e37-a376-40c4-b891-5e45c03cb9d4 in datapath b443ed89-b341-42c7-9f7d-f5f0acb8cd4d unbound from our chassis
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.357 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b443ed89-b341-42c7-9f7d-f5f0acb8cd4d
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.379 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6be0b3-379a-40d6-a3b0-b6b93767746d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.380 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb443ed89-b1 in ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.382 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb443ed89-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.382 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[396311d3-954a-46d3-8dad-36238e473f78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.384 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d5e8df-97ff-475f-a2b6-042c025fb5b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.396 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[0106721c-2343-4fdf-b021-1c85b5e2ff2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.417 2 DEBUG nova.network.neutron [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updated VIF entry in instance network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.418 2 DEBUG nova.network.neutron [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.429 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a558891e-63b6-4a5f-9e94-956fab354523]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.435 2 DEBUG oslo_concurrency.lockutils [req-e9961982-491e-4e99-a64b-56b5020758d9 req-57f3fe97-f8ab-44ea-85b9-14c9edc9baf2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.460 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[8eff4ddb-b947-4930-a0a1-0b2f0b137c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 NetworkManager[52324]: <info>  [1759434170.4724] manager: (tapb443ed89-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.472 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[bbeba3e4-920f-4f0b-95e9-6ebf4203736c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.517 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[28561b7b-8923-47cd-8c71-c58a38d97f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.521 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[cc556c0d-5fd5-4131-b00c-f1489563460d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 NetworkManager[52324]: <info>  [1759434170.5556] device (tapb443ed89-b0): carrier: link connected
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.563 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[641ea1a3-5679-4dc9-a584-af9d508128be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.588 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab53e29-f434-45f9-8380-68e098609553]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb443ed89-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:99:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533117, 'reachable_time': 38400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259175, 'error': None, 'target': 'ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.617 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[93bfc2b7-caa4-4fd8-8e48-080f7b032d9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:9984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533117, 'tstamp': 533117}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259176, 'error': None, 'target': 'ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.640 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[dc583f3c-2322-4be0-afa8-77449d82f11e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb443ed89-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:99:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533117, 'reachable_time': 38400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259177, 'error': None, 'target': 'ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.690 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[4c630f77-2482-4776-b20f-b11762cc2994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.733 2 DEBUG nova.network.neutron [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Updated VIF entry in instance network info cache for port e5040e37-a376-40c4-b891-5e45c03cb9d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.734 2 DEBUG nova.network.neutron [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Updating instance_info_cache with network_info: [{"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.750 2 DEBUG oslo_concurrency.lockutils [req-d1472262-8ddb-48dc-ac78-a973462b544a req-40fcd410-ad8c-46eb-854a-218fb38e0471 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-3aad9658-5f65-4eed-8b09-f453505c2d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.751 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a4deda70-8476-4380-b463-a97ffa468a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.753 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb443ed89-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.753 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.753 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb443ed89-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:50 compute-0 NetworkManager[52324]: <info>  [1759434170.7562] manager: (tapb443ed89-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 02 19:42:50 compute-0 kernel: tapb443ed89-b0: entered promiscuous mode
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.759 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb443ed89-b0, col_values=(('external_ids', {'iface-id': 'e2400528-3b0f-4000-a99b-374c7f338a66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:50 compute-0 ovn_controller[97052]: 2025-10-02T19:42:50Z|00105|binding|INFO|Releasing lport e2400528-3b0f-4000-a99b-374c7f338a66 from this chassis (sb_readonly=0)
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.791 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b443ed89-b341-42c7-9f7d-f5f0acb8cd4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b443ed89-b341-42c7-9f7d-f5f0acb8cd4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.792 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f252bd-7e7f-4708-9b61-13d12c4c22d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.793 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/b443ed89-b341-42c7-9f7d-f5f0acb8cd4d.pid.haproxy
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID b443ed89-b341-42c7-9f7d-f5f0acb8cd4d
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:42:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:50.793 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'env', 'PROCESS_TAG=haproxy-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b443ed89-b341-42c7-9f7d-f5f0acb8cd4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.809 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434170.8094614, fd018206-5b5d-4759-8481-a7dd68c01a2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.810 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] VM Started (Lifecycle Event)
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.813 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.818 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.823 2 INFO nova.virt.libvirt.driver [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Instance spawned successfully.
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.824 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.828 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.834 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.847 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.848 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.848 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.849 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.850 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.850 2 DEBUG nova.virt.libvirt.driver [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.854 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.855 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434170.8096135, fd018206-5b5d-4759-8481-a7dd68c01a2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.855 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] VM Paused (Lifecycle Event)
Oct 02 19:42:50 compute-0 sshd-session[258968]: Failed password for root from 193.46.255.20 port 59018 ssh2
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.882 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.887 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434170.8161278, fd018206-5b5d-4759-8481-a7dd68c01a2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.887 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] VM Resumed (Lifecycle Event)
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.905 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.913 2 INFO nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Took 9.42 seconds to spawn the instance on the hypervisor.
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.914 2 DEBUG nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.915 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.946 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:50 compute-0 nova_compute[194781]: 2025-10-02 19:42:50.985 2 INFO nova.compute.manager [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Took 9.97 seconds to build instance.
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.008 2 DEBUG oslo_concurrency.lockutils [None req-31ea37e5-b41b-46b9-b3e4-b471558014bc c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:51 compute-0 podman[259217]: 2025-10-02 19:42:51.227368238 +0000 UTC m=+0.076838412 container create f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:42:51 compute-0 systemd[1]: Started libpod-conmon-f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9.scope.
Oct 02 19:42:51 compute-0 podman[259217]: 2025-10-02 19:42:51.182665051 +0000 UTC m=+0.032135255 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:42:51 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64165b8f6253dcc23d13706d8371b97f9fe7703e6f57b079cda715aafc56104/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:42:51 compute-0 podman[259217]: 2025-10-02 19:42:51.348456936 +0000 UTC m=+0.197927140 container init f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 19:42:51 compute-0 podman[259230]: 2025-10-02 19:42:51.351546968 +0000 UTC m=+0.082136963 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 19:42:51 compute-0 podman[259217]: 2025-10-02 19:42:51.357315262 +0000 UTC m=+0.206785456 container start f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:42:51 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [NOTICE]   (259271) : New worker (259275) forked
Oct 02 19:42:51 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [NOTICE]   (259271) : Loading success.
Oct 02 19:42:51 compute-0 podman[259231]: 2025-10-02 19:42:51.432815138 +0000 UTC m=+0.170564644 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.486 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434171.4863257, 3aad9658-5f65-4eed-8b09-f453505c2d61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.487 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] VM Started (Lifecycle Event)
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.509 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.513 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434171.4864154, 3aad9658-5f65-4eed-8b09-f453505c2d61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.513 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] VM Paused (Lifecycle Event)
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.527 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.531 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.544 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:51 compute-0 unix_chkpwd[259288]: password check failed for user (root)
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.939 2 DEBUG nova.compute.manager [req-13d994bb-416b-4e48-8fde-723e773912c2 req-eb202850-2b74-409f-b215-c182443a33ae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.940 2 DEBUG oslo_concurrency.lockutils [req-13d994bb-416b-4e48-8fde-723e773912c2 req-eb202850-2b74-409f-b215-c182443a33ae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.941 2 DEBUG oslo_concurrency.lockutils [req-13d994bb-416b-4e48-8fde-723e773912c2 req-eb202850-2b74-409f-b215-c182443a33ae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.942 2 DEBUG oslo_concurrency.lockutils [req-13d994bb-416b-4e48-8fde-723e773912c2 req-eb202850-2b74-409f-b215-c182443a33ae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.942 2 DEBUG nova.compute.manager [req-13d994bb-416b-4e48-8fde-723e773912c2 req-eb202850-2b74-409f-b215-c182443a33ae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] No waiting events found dispatching network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:51 compute-0 nova_compute[194781]: 2025-10-02 19:42:51.943 2 WARNING nova.compute.manager [req-13d994bb-416b-4e48-8fde-723e773912c2 req-eb202850-2b74-409f-b215-c182443a33ae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received unexpected event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 for instance with vm_state active and task_state None.
Oct 02 19:42:52 compute-0 nova_compute[194781]: 2025-10-02 19:42:52.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:53 compute-0 sshd-session[258968]: Failed password for root from 193.46.255.20 port 59018 ssh2
Oct 02 19:42:53 compute-0 nova_compute[194781]: 2025-10-02 19:42:53.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:55 compute-0 sshd-session[258968]: Received disconnect from 193.46.255.20 port 59018:11:  [preauth]
Oct 02 19:42:55 compute-0 sshd-session[258968]: Disconnected from authenticating user root 193.46.255.20 port 59018 [preauth]
Oct 02 19:42:55 compute-0 sshd-session[258968]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20  user=root
Oct 02 19:42:55 compute-0 nova_compute[194781]: 2025-10-02 19:42:55.482 2 DEBUG nova.compute.manager [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:55 compute-0 nova_compute[194781]: 2025-10-02 19:42:55.483 2 DEBUG nova.compute.manager [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing instance network info cache due to event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:42:55 compute-0 nova_compute[194781]: 2025-10-02 19:42:55.484 2 DEBUG oslo_concurrency.lockutils [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:42:55 compute-0 nova_compute[194781]: 2025-10-02 19:42:55.484 2 DEBUG oslo_concurrency.lockutils [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:42:55 compute-0 nova_compute[194781]: 2025-10-02 19:42:55.485 2 DEBUG nova.network.neutron [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:42:55 compute-0 unix_chkpwd[259291]: password check failed for user (root)
Oct 02 19:42:55 compute-0 sshd-session[259289]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20  user=root
Oct 02 19:42:55 compute-0 nova_compute[194781]: 2025-10-02 19:42:55.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.592 2 DEBUG nova.compute.manager [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Received event network-vif-plugged-e5040e37-a376-40c4-b891-5e45c03cb9d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.593 2 DEBUG oslo_concurrency.lockutils [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.593 2 DEBUG oslo_concurrency.lockutils [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.594 2 DEBUG oslo_concurrency.lockutils [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.594 2 DEBUG nova.compute.manager [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Processing event network-vif-plugged-e5040e37-a376-40c4-b891-5e45c03cb9d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.594 2 DEBUG nova.compute.manager [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Received event network-vif-plugged-e5040e37-a376-40c4-b891-5e45c03cb9d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.595 2 DEBUG oslo_concurrency.lockutils [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.595 2 DEBUG oslo_concurrency.lockutils [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.595 2 DEBUG oslo_concurrency.lockutils [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.596 2 DEBUG nova.compute.manager [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] No waiting events found dispatching network-vif-plugged-e5040e37-a376-40c4-b891-5e45c03cb9d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.596 2 WARNING nova.compute.manager [req-f1c7da33-9108-470d-bd10-ed7fa6854042 req-e9937666-d3ef-4763-97a1-cb48020e23b7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Received unexpected event network-vif-plugged-e5040e37-a376-40c4-b891-5e45c03cb9d4 for instance with vm_state building and task_state spawning.
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.600 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.606 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434177.6061695, 3aad9658-5f65-4eed-8b09-f453505c2d61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.607 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] VM Resumed (Lifecycle Event)
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.608 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.613 2 INFO nova.virt.libvirt.driver [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Instance spawned successfully.
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.614 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.643 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.653 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.658 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.658 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.659 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.659 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.660 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.660 2 DEBUG nova.virt.libvirt.driver [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:42:57 compute-0 sshd-session[259289]: Failed password for root from 193.46.255.20 port 57268 ssh2
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.700 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.726 2 INFO nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Took 15.95 seconds to spawn the instance on the hypervisor.
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.726 2 DEBUG nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.790 2 INFO nova.compute.manager [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Took 16.74 seconds to build instance.
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.806 2 DEBUG oslo_concurrency.lockutils [None req-de5625c8-e2a9-4f78-8146-f89c189d8381 eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.889 2 DEBUG nova.network.neutron [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updated VIF entry in instance network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.889 2 DEBUG nova.network.neutron [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:42:57 compute-0 nova_compute[194781]: 2025-10-02 19:42:57.910 2 DEBUG oslo_concurrency.lockutils [req-262414c1-7e2e-4a33-8842-4e1f9e61953e req-5df1796c-c075-4e6d-9b6c-d9b9b4b55aef fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:42:58 compute-0 nova_compute[194781]: 2025-10-02 19:42:58.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:58 compute-0 podman[259292]: 2025-10-02 19:42:58.744077283 +0000 UTC m=+0.106677436 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:42:59 compute-0 unix_chkpwd[259315]: password check failed for user (root)
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.359 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "3aad9658-5f65-4eed-8b09-f453505c2d61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.360 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.360 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.361 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.361 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.362 2 INFO nova.compute.manager [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Terminating instance
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.363 2 DEBUG nova.compute.manager [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:42:59 compute-0 kernel: tape5040e37-a3 (unregistering): left promiscuous mode
Oct 02 19:42:59 compute-0 NetworkManager[52324]: <info>  [1759434179.3936] device (tape5040e37-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 ovn_controller[97052]: 2025-10-02T19:42:59Z|00106|binding|INFO|Releasing lport e5040e37-a376-40c4-b891-5e45c03cb9d4 from this chassis (sb_readonly=0)
Oct 02 19:42:59 compute-0 ovn_controller[97052]: 2025-10-02T19:42:59Z|00107|binding|INFO|Setting lport e5040e37-a376-40c4-b891-5e45c03cb9d4 down in Southbound
Oct 02 19:42:59 compute-0 ovn_controller[97052]: 2025-10-02T19:42:59Z|00108|binding|INFO|Removing iface tape5040e37-a3 ovn-installed in OVS
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.413 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b9:c8 10.100.0.5'], port_security=['fa:16:3e:b8:b9:c8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3aad9658-5f65-4eed-8b09-f453505c2d61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a776ef3132894c27a8bfaa390763de2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd772d5fb-ebd6-4044-8e54-4e43ef5af6f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43cafc79-e8c0-4c28-812d-ca33028d228b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=e5040e37-a376-40c4-b891-5e45c03cb9d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.414 105943 INFO neutron.agent.ovn.metadata.agent [-] Port e5040e37-a376-40c4-b891-5e45c03cb9d4 in datapath b443ed89-b341-42c7-9f7d-f5f0acb8cd4d unbound from our chassis
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.416 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.417 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[abec9020-fd86-41d2-9243-57110598611b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.418 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d namespace which is not needed anymore
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 02 19:42:59 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Consumed 3.342s CPU time.
Oct 02 19:42:59 compute-0 systemd-machined[154795]: Machine qemu-10-instance-00000009 terminated.
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [NOTICE]   (259271) : haproxy version is 2.8.14-c23fe91
Oct 02 19:42:59 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [NOTICE]   (259271) : path to executable is /usr/sbin/haproxy
Oct 02 19:42:59 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [WARNING]  (259271) : Exiting Master process...
Oct 02 19:42:59 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [ALERT]    (259271) : Current worker (259275) exited with code 143 (Terminated)
Oct 02 19:42:59 compute-0 neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d[259248]: [WARNING]  (259271) : All workers exited. Exiting... (0)
Oct 02 19:42:59 compute-0 systemd[1]: libpod-f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9.scope: Deactivated successfully.
Oct 02 19:42:59 compute-0 podman[259338]: 2025-10-02 19:42:59.611480932 +0000 UTC m=+0.079927615 container died f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.634 2 INFO nova.virt.libvirt.driver [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Instance destroyed successfully.
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.634 2 DEBUG nova.objects.instance [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lazy-loading 'resources' on Instance uuid 3aad9658-5f65-4eed-8b09-f453505c2d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.652 2 DEBUG nova.virt.libvirt.vif [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1810216507',display_name='tempest-ServerAddressesTestJSON-server-1810216507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1810216507',id=9,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a776ef3132894c27a8bfaa390763de2a',ramdisk_id='',reservation_id='r-yqbfw7nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1342472581',owner_user_name='tempest-ServerAddressesTestJSON-1342472581-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:42:57Z,user_data=None,user_id='eefe39d7484540c99c7e4ac98c03cf24',uuid=3aad9658-5f65-4eed-8b09-f453505c2d61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.652 2 DEBUG nova.network.os_vif_util [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Converting VIF {"id": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "address": "fa:16:3e:b8:b9:c8", "network": {"id": "b443ed89-b341-42c7-9f7d-f5f0acb8cd4d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1498077201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a776ef3132894c27a8bfaa390763de2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5040e37-a3", "ovs_interfaceid": "e5040e37-a376-40c4-b891-5e45c03cb9d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.653 2 DEBUG nova.network.os_vif_util [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.654 2 DEBUG os_vif [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5040e37-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9-userdata-shm.mount: Deactivated successfully.
Oct 02 19:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b64165b8f6253dcc23d13706d8371b97f9fe7703e6f57b079cda715aafc56104-merged.mount: Deactivated successfully.
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.664 2 INFO os_vif [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b9:c8,bridge_name='br-int',has_traffic_filtering=True,id=e5040e37-a376-40c4-b891-5e45c03cb9d4,network=Network(b443ed89-b341-42c7-9f7d-f5f0acb8cd4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5040e37-a3')
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.664 2 INFO nova.virt.libvirt.driver [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Deleting instance files /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61_del
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.665 2 INFO nova.virt.libvirt.driver [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Deletion of /var/lib/nova/instances/3aad9658-5f65-4eed-8b09-f453505c2d61_del complete
Oct 02 19:42:59 compute-0 podman[259338]: 2025-10-02 19:42:59.674940628 +0000 UTC m=+0.143387301 container cleanup f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:42:59 compute-0 systemd[1]: libpod-conmon-f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9.scope: Deactivated successfully.
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.709 2 INFO nova.compute.manager [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Took 0.35 seconds to destroy the instance on the hypervisor.
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.709 2 DEBUG oslo.service.loopingcall [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.710 2 DEBUG nova.compute.manager [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.710 2 DEBUG nova.network.neutron [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:42:59 compute-0 podman[209015]: time="2025-10-02T19:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:42:59 compute-0 podman[259380]: 2025-10-02 19:42:59.7642102 +0000 UTC m=+0.052709061 container remove f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:42:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35673 "" "Go-http-client/1.1"
Oct 02 19:42:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6152 "" "Go-http-client/1.1"
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.785 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[d0563fc7-fda8-4041-8a3c-9b099e52a511]: (4, ('Thu Oct  2 07:42:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d (f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9)\nf105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9\nThu Oct  2 07:42:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d (f105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9)\nf105479bd8ac1453b66310d767ee049806258b61fbaa717912968ab973163df9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.787 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[961173fd-7645-44f6-bf01-bd8ab4a71470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.788 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb443ed89-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:42:59 compute-0 kernel: tapb443ed89-b0: left promiscuous mode
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.806 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[b514b629-2eb6-459e-a969-7090b5e92ba9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 nova_compute[194781]: 2025-10-02 19:42:59.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.830 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a80cbb1a-c5a9-4dc9-b2f4-691f09f6594e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.832 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[88a2f2af-0785-4374-8e48-7b91bd0c488a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.848 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[89c0777c-1344-41d7-a2ce-77bcce44795c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533107, 'reachable_time': 16142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259393, 'error': None, 'target': 'ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:42:59 compute-0 systemd[1]: run-netns-ovnmeta\x2db443ed89\x2db341\x2d42c7\x2d9f7d\x2df5f0acb8cd4d.mount: Deactivated successfully.
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.853 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b443ed89-b341-42c7-9f7d-f5f0acb8cd4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:42:59 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:42:59.853 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2acdd3-6a4c-4898-a548-68f3feef2036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:00 compute-0 nova_compute[194781]: 2025-10-02 19:43:00.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.154 2 DEBUG nova.network.neutron [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.180 2 INFO nova.compute.manager [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Took 1.47 seconds to deallocate network for instance.
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.223 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.224 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.267 2 DEBUG nova.compute.manager [req-96de5607-07b6-4a20-886b-de3e35a98416 req-340fab1a-410a-40b4-8311-88c1340a51bb fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Received event network-vif-deleted-e5040e37-a376-40c4-b891-5e45c03cb9d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.363 2 DEBUG nova.compute.provider_tree [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:43:01 compute-0 sshd-session[259289]: Failed password for root from 193.46.255.20 port 57268 ssh2
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: ERROR   19:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: ERROR   19:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: ERROR   19:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: ERROR   19:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: ERROR   19:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:43:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.505 2 DEBUG nova.scheduler.client.report [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.529 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.562 2 INFO nova.scheduler.client.report [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Deleted allocations for instance 3aad9658-5f65-4eed-8b09-f453505c2d61
Oct 02 19:43:01 compute-0 nova_compute[194781]: 2025-10-02 19:43:01.621 2 DEBUG oslo_concurrency.lockutils [None req-5241e3ca-0e1e-4e8a-b6b7-990d9a24a8ac eefe39d7484540c99c7e4ac98c03cf24 a776ef3132894c27a8bfaa390763de2a - - default default] Lock "3aad9658-5f65-4eed-8b09-f453505c2d61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:02 compute-0 unix_chkpwd[259394]: password check failed for user (root)
Oct 02 19:43:03 compute-0 sshd-session[259289]: Failed password for root from 193.46.255.20 port 57268 ssh2
Oct 02 19:43:04 compute-0 nova_compute[194781]: 2025-10-02 19:43:04.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:04 compute-0 nova_compute[194781]: 2025-10-02 19:43:04.954 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:04 compute-0 nova_compute[194781]: 2025-10-02 19:43:04.955 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:04 compute-0 nova_compute[194781]: 2025-10-02 19:43:04.969 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.022 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.022 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.031 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.031 2 INFO nova.compute.claims [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.341 2 DEBUG nova.compute.provider_tree [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.356 2 DEBUG nova.scheduler.client.report [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:43:05 compute-0 sshd-session[259289]: Received disconnect from 193.46.255.20 port 57268:11:  [preauth]
Oct 02 19:43:05 compute-0 sshd-session[259289]: Disconnected from authenticating user root 193.46.255.20 port 57268 [preauth]
Oct 02 19:43:05 compute-0 sshd-session[259289]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.20  user=root
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.374 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.374 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.410 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.411 2 DEBUG nova.network.neutron [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.425 2 INFO nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.439 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.511 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.513 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.513 2 INFO nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Creating image(s)
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.514 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "/var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.514 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "/var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.515 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "/var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.515 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.515 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:05 compute-0 nova_compute[194781]: 2025-10-02 19:43:05.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:06 compute-0 nova_compute[194781]: 2025-10-02 19:43:06.036 2 DEBUG nova.policy [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3dae65399d7c47999282bff6664f6d16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:43:06 compute-0 nova_compute[194781]: 2025-10-02 19:43:06.784 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:06 compute-0 nova_compute[194781]: 2025-10-02 19:43:06.844 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:06 compute-0 nova_compute[194781]: 2025-10-02 19:43:06.845 2 DEBUG nova.virt.images [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] b43dc593-d176-449d-a8d5-95d53b8e1b5e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 19:43:06 compute-0 nova_compute[194781]: 2025-10-02 19:43:06.878 2 DEBUG nova.privsep.utils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 19:43:06 compute-0 nova_compute[194781]: 2025-10-02 19:43:06.879 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.part /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.118 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.part /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.converted" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.124 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.198 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e.converted --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.200 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.220 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.278 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.280 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.280 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.298 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.355 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.356 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e,backing_fmt=raw /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.398 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e,backing_fmt=raw /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.399 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.400 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.454 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.455 2 DEBUG nova.virt.disk.api [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Checking if we can resize image /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.456 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.517 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.519 2 DEBUG nova.virt.disk.api [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Cannot resize image /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.520 2 DEBUG nova.objects.instance [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lazy-loading 'migration_context' on Instance uuid f0ac40ea-f3c9-4981-ba99-bfbf34bd253a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.538 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.539 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Ensure instance console log exists: /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.540 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.541 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.541 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:07 compute-0 podman[259423]: 2025-10-02 19:43:07.733092419 +0000 UTC m=+0.093363442 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:43:07 compute-0 podman[259424]: 2025-10-02 19:43:07.763534108 +0000 UTC m=+0.087053254 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:43:07 compute-0 nova_compute[194781]: 2025-10-02 19:43:07.823 2 DEBUG nova.network.neutron [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Successfully created port: 45b53db0-b1f5-401e-8a98-c127ada04a9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.494 2 DEBUG nova.network.neutron [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Successfully updated port: 45b53db0-b1f5-401e-8a98-c127ada04a9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.553 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.554 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquired lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.554 2 DEBUG nova.network.neutron [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.661 2 DEBUG nova.compute.manager [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-changed-45b53db0-b1f5-401e-8a98-c127ada04a9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.661 2 DEBUG nova.compute.manager [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Refreshing instance network info cache due to event network-changed-45b53db0-b1f5-401e-8a98-c127ada04a9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.662 2 DEBUG oslo_concurrency.lockutils [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:09 compute-0 nova_compute[194781]: 2025-10-02 19:43:09.790 2 DEBUG nova.network.neutron [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:43:10 compute-0 ovn_controller[97052]: 2025-10-02T19:43:10Z|00109|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:43:10 compute-0 ovn_controller[97052]: 2025-10-02T19:43:10Z|00110|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:43:10 compute-0 ovn_controller[97052]: 2025-10-02T19:43:10Z|00111|binding|INFO|Releasing lport 5a048b67-2936-4fb1-8322-b03194cd7ecb from this chassis (sb_readonly=0)
Oct 02 19:43:10 compute-0 nova_compute[194781]: 2025-10-02 19:43:10.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:10 compute-0 nova_compute[194781]: 2025-10-02 19:43:10.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:10 compute-0 nova_compute[194781]: 2025-10-02 19:43:10.985 2 DEBUG nova.network.neutron [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.004 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Releasing lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.006 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Instance network_info: |[{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.007 2 DEBUG oslo_concurrency.lockutils [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.008 2 DEBUG nova.network.neutron [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Refreshing network info cache for port 45b53db0-b1f5-401e-8a98-c127ada04a9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.014 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Start _get_guest_xml network_info=[{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:42:55Z,direct_url=<?>,disk_format='qcow2',id=b43dc593-d176-449d-a8d5-95d53b8e1b5e,min_disk=0,min_ram=0,name='tempest-scenario-img--1036197514',owner='3dae65399d7c47999282bff6664f6d16',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:42:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.027 2 WARNING nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.040 2 DEBUG nova.virt.libvirt.host [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.041 2 DEBUG nova.virt.libvirt.host [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.047 2 DEBUG nova.virt.libvirt.host [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.048 2 DEBUG nova.virt.libvirt.host [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.048 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.049 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:42:55Z,direct_url=<?>,disk_format='qcow2',id=b43dc593-d176-449d-a8d5-95d53b8e1b5e,min_disk=0,min_ram=0,name='tempest-scenario-img--1036197514',owner='3dae65399d7c47999282bff6664f6d16',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:42:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.050 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.051 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.051 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.052 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.053 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.053 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.054 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.055 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.055 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.056 2 DEBUG nova.virt.hardware [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.060 2 DEBUG nova.virt.libvirt.vif [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg',id=11,image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d4713e41-6620-49a4-8665-1b2fbe664d9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3dae65399d7c47999282bff6664f6d16',ramdisk_id='',reservation_id='r-35d7ip07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-732152950',owner_user_name='tempest-PrometheusGabbiTest-732152950-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:43:05Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='23b5415980f24bbbbfa331c702f6f7d9',uuid=f0ac40ea-f3c9-4981-ba99-bfbf34bd253a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.061 2 DEBUG nova.network.os_vif_util [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converting VIF {"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.063 2 DEBUG nova.network.os_vif_util [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.064 2 DEBUG nova.objects.instance [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0ac40ea-f3c9-4981-ba99-bfbf34bd253a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.082 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <uuid>f0ac40ea-f3c9-4981-ba99-bfbf34bd253a</uuid>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <name>instance-0000000b</name>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:name>te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg</nova:name>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:43:11</nova:creationTime>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:user uuid="23b5415980f24bbbbfa331c702f6f7d9">tempest-PrometheusGabbiTest-732152950-project-member</nova:user>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:project uuid="3dae65399d7c47999282bff6664f6d16">tempest-PrometheusGabbiTest-732152950</nova:project>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="b43dc593-d176-449d-a8d5-95d53b8e1b5e"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         <nova:port uuid="45b53db0-b1f5-401e-8a98-c127ada04a9c">
Oct 02 19:43:11 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.2.28" ipVersion="4"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <system>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <entry name="serial">f0ac40ea-f3c9-4981-ba99-bfbf34bd253a</entry>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <entry name="uuid">f0ac40ea-f3c9-4981-ba99-bfbf34bd253a</entry>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </system>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <os>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </os>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <features>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </features>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.config"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:e2:c6:bd"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <target dev="tap45b53db0-b1"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/console.log" append="off"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <video>
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </video>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:43:11 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:43:11 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:43:11 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:43:11 compute-0 nova_compute[194781]: </domain>
Oct 02 19:43:11 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.095 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Preparing to wait for external event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.096 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.096 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.097 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.097 2 DEBUG nova.virt.libvirt.vif [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg',id=11,image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d4713e41-6620-49a4-8665-1b2fbe664d9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3dae65399d7c47999282bff6664f6d16',ramdisk_id='',reservation_id='r-35d7ip07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-732152950',owner_user_name='tempest-PrometheusGabbiTest-732152950-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:43:05Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='23b5415980f24bbbbfa331c702f6f7d9',uuid=f0ac40ea-f3c9-4981-ba99-bfbf34bd253a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.099 2 DEBUG nova.network.os_vif_util [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converting VIF {"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.100 2 DEBUG nova.network.os_vif_util [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.101 2 DEBUG os_vif [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45b53db0-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45b53db0-b1, col_values=(('external_ids', {'iface-id': '45b53db0-b1f5-401e-8a98-c127ada04a9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:c6:bd', 'vm-uuid': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:11 compute-0 NetworkManager[52324]: <info>  [1759434191.1143] manager: (tap45b53db0-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.125 2 INFO os_vif [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1')
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.198 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.198 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.198 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] No VIF found with MAC fa:16:3e:e2:c6:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.199 2 INFO nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Using config drive
Oct 02 19:43:11 compute-0 podman[259463]: 2025-10-02 19:43:11.258922772 +0000 UTC m=+0.093848584 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Oct 02 19:43:11 compute-0 podman[259464]: 2025-10-02 19:43:11.291758025 +0000 UTC m=+0.109740237 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Oct 02 19:43:11 compute-0 podman[259465]: 2025-10-02 19:43:11.316329328 +0000 UTC m=+0.123302948 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.860 2 INFO nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Creating config drive at /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.config
Oct 02 19:43:11 compute-0 nova_compute[194781]: 2025-10-02 19:43:11.870 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz6yxq0ph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.000 2 DEBUG oslo_concurrency.processutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz6yxq0ph" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:12 compute-0 kernel: tap45b53db0-b1: entered promiscuous mode
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 NetworkManager[52324]: <info>  [1759434192.1014] manager: (tap45b53db0-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 ovn_controller[97052]: 2025-10-02T19:43:12Z|00112|binding|INFO|Claiming lport 45b53db0-b1f5-401e-8a98-c127ada04a9c for this chassis.
Oct 02 19:43:12 compute-0 ovn_controller[97052]: 2025-10-02T19:43:12Z|00113|binding|INFO|45b53db0-b1f5-401e-8a98-c127ada04a9c: Claiming fa:16:3e:e2:c6:bd 10.100.2.28
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.142 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:c6:bd 10.100.2.28'], port_security=['fa:16:3e:e2:c6:bd 10.100.2.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.28/16', 'neutron:device_id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dae65399d7c47999282bff6664f6d16', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb16109a-6359-4dd8-bfae-0a7015239961', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31c9bff4-971d-41c4-a82c-3f2067f94d21, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=45b53db0-b1f5-401e-8a98-c127ada04a9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.144 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 45b53db0-b1f5-401e-8a98-c127ada04a9c in datapath b8407621-6f3e-4864-b018-8cf0d0e8428e bound to our chassis
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.146 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8407621-6f3e-4864-b018-8cf0d0e8428e
Oct 02 19:43:12 compute-0 systemd-machined[154795]: New machine qemu-11-instance-0000000b.
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.167 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0d989473-e7b8-4458-8684-a7ad3202776f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.168 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8407621-61 in ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:43:12 compute-0 ovn_controller[97052]: 2025-10-02T19:43:12Z|00114|binding|INFO|Setting lport 45b53db0-b1f5-401e-8a98-c127ada04a9c ovn-installed in OVS
Oct 02 19:43:12 compute-0 ovn_controller[97052]: 2025-10-02T19:43:12Z|00115|binding|INFO|Setting lport 45b53db0-b1f5-401e-8a98-c127ada04a9c up in Southbound
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.170 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8407621-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.170 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[447a878f-9075-4200-a050-12115f16fced]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.171 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[75ec7406-accd-428f-9245-142a65080756]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.190 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9cc638-2ad9-4b3f-b787-ea620b41a4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 systemd-udevd[259539]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.218 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1ba8b7-e3fb-49dc-82e0-993c6fbcfaef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 NetworkManager[52324]: <info>  [1759434192.2348] device (tap45b53db0-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:43:12 compute-0 NetworkManager[52324]: <info>  [1759434192.2359] device (tap45b53db0-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.250 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[a0597bce-2b5a-4749-8291-093065d03871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.259 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e9d9f5-a350-47c5-ba31-0790f5668816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 NetworkManager[52324]: <info>  [1759434192.2601] manager: (tapb8407621-60): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Oct 02 19:43:12 compute-0 systemd-udevd[259544]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.313 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[49bde4bf-0382-4cba-888e-942387530647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.316 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[5e148b43-edb4-4f66-98df-1ff82cc2369a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 NetworkManager[52324]: <info>  [1759434192.3461] device (tapb8407621-60): carrier: link connected
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.362 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[8a276e13-813e-4b80-8b21-d7de0c13c8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.388 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad8f2ba-6794-4bbe-b8bf-6a96506a8122]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8407621-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:a6:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535296, 'reachable_time': 30073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259569, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.417 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[07816377-854e-4f2c-82b3-68e17a856edc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:a65c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535296, 'tstamp': 535296}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259570, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.434 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cce045cc-0e94-4e66-b5a1-22359f458109]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8407621-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:a6:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535296, 'reachable_time': 30073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259571, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.486 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[847f8f46-6298-4436-9d27-54e7b0fa3bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.586 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa0108f-5b71-4add-9481-de19e1ff9554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.588 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8407621-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.588 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.589 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8407621-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:12 compute-0 NetworkManager[52324]: <info>  [1759434192.5938] manager: (tapb8407621-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 kernel: tapb8407621-60: entered promiscuous mode
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.601 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8407621-60, col_values=(('external_ids', {'iface-id': 'aaa6ea3c-0164-44d4-b435-0c6c04e73e3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:12 compute-0 ovn_controller[97052]: 2025-10-02T19:43:12Z|00116|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.630 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8407621-6f3e-4864-b018-8cf0d0e8428e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8407621-6f3e-4864-b018-8cf0d0e8428e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.632 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[771662b5-68a8-41c1-9e26-c1c1b2de523a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.633 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-b8407621-6f3e-4864-b018-8cf0d0e8428e
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/b8407621-6f3e-4864-b018-8cf0d0e8428e.pid.haproxy
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID b8407621-6f3e-4864-b018-8cf0d0e8428e
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:43:12 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:12.634 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'env', 'PROCESS_TAG=haproxy-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8407621-6f3e-4864-b018-8cf0d0e8428e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 nova_compute[194781]: 2025-10-02 19:43:12.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.946 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.947 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.954 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 6eada58a-d077-43e5-ab40-dd45abbe38f3 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:43:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:12.955 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/6eada58a-d077-43e5-ab40-dd45abbe38f3 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.000 2 DEBUG nova.network.neutron [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updated VIF entry in instance network info cache for port 45b53db0-b1f5-401e-8a98-c127ada04a9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.001 2 DEBUG nova.network.neutron [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.022 2 DEBUG oslo_concurrency.lockutils [req-992a4b72-4467-4b1c-be71-44ee50851eec req-dace922c-edff-43c9-9204-91954cfcc14a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:43:13 compute-0 podman[259610]: 2025-10-02 19:43:13.182330864 +0000 UTC m=+0.094354968 container create 598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:43:13 compute-0 podman[259610]: 2025-10-02 19:43:13.133136007 +0000 UTC m=+0.045160131 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:43:13 compute-0 systemd[1]: Started libpod-conmon-598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10.scope.
Oct 02 19:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1189b86e3b7293ab1cf09fb5a8365c543d53b9dc719a4e9a0da0c0154f63f1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:13 compute-0 podman[259610]: 2025-10-02 19:43:13.299484218 +0000 UTC m=+0.211508352 container init 598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:43:13 compute-0 podman[259610]: 2025-10-02 19:43:13.307611004 +0000 UTC m=+0.219635108 container start 598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 19:43:13 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [NOTICE]   (259629) : New worker (259631) forked
Oct 02 19:43:13 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [NOTICE]   (259629) : Loading success.
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.557 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434193.5564992, f0ac40ea-f3c9-4981-ba99-bfbf34bd253a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.561 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] VM Started (Lifecycle Event)
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.584 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.589 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434193.5566933, f0ac40ea-f3c9-4981-ba99-bfbf34bd253a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.590 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] VM Paused (Lifecycle Event)
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.615 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.620 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:43:13 compute-0 nova_compute[194781]: 2025-10-02 19:43:13.642 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:43:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:14.081 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1980 Content-Type: application/json Date: Thu, 02 Oct 2025 19:43:12 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-756dd8ca-705b-4c95-b796-3571357c1f03 x-openstack-request-id: req-756dd8ca-705b-4c95-b796-3571357c1f03 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:43:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:14.082 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "6eada58a-d077-43e5-ab40-dd45abbe38f3", "name": "tempest-ServerActionsTestJSON-server-1950508224", "status": "ACTIVE", "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "user_id": "1de0891a14a8410da559e3197c8ff98b", "metadata": {}, "hostId": "36ddf7efe56b7a8a024f26690e23a834ddeed3a02da86b8b0a1d3360", "image": {"id": "c191839f-7364-41ce-80c8-eff8077fc750", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c191839f-7364-41ce-80c8-eff8077fc750"}]}, "flavor": {"id": "7ab5ea96-81dd-4496-8a1f-012f7d2c53c5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7ab5ea96-81dd-4496-8a1f-012f7d2c53c5"}]}, "created": "2025-10-02T19:41:46Z", "updated": "2025-10-02T19:42:07Z", "addresses": {"tempest-ServerActionsTestJSON-575966371-network": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:15:84:0f"}, {"version": 4, "addr": "192.168.122.244", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:15:84:0f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/6eada58a-d077-43e5-ab40-dd45abbe38f3"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/6eada58a-d077-43e5-ab40-dd45abbe38f3"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1857372306", "OS-SRV-USG:launched_at": "2025-10-02T19:42:07.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1923392189"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:43:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:14.082 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/6eada58a-d077-43e5-ab40-dd45abbe38f3 used request id req-756dd8ca-705b-4c95-b796-3571357c1f03 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:43:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:14.086 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '6eada58a-d077-43e5-ab40-dd45abbe38f3', 'name': 'tempest-ServerActionsTestJSON-server-1950508224', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c191839f-7364-41ce-80c8-eff8077fc750'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5d458e53358c4398b6ba6051d5c82805', 'user_id': '1de0891a14a8410da559e3197c8ff98b', 'hostId': '36ddf7efe56b7a8a024f26690e23a834ddeed3a02da86b8b0a1d3360', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:43:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:14.090 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:43:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:14.092 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:43:14 compute-0 nova_compute[194781]: 2025-10-02 19:43:14.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434179.6250923, 3aad9658-5f65-4eed-8b09-f453505c2d61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:43:14 compute-0 nova_compute[194781]: 2025-10-02 19:43:14.628 2 INFO nova.compute.manager [-] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] VM Stopped (Lifecycle Event)
Oct 02 19:43:14 compute-0 nova_compute[194781]: 2025-10-02 19:43:14.646 2 DEBUG nova.compute.manager [None req-1b1b2a84-e7e3-4c06-870a-ea2f836641e7 - - - - - -] [instance: 3aad9658-5f65-4eed-8b09-f453505c2d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.127 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1691 Content-Type: application/json Date: Thu, 02 Oct 2025 19:43:14 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ad4e0eaa-6ca8-41ea-b05c-298b648976b3 x-openstack-request-id: req-ad4e0eaa-6ca8-41ea-b05c-298b648976b3 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.128 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a", "name": "te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg", "status": "BUILD", "tenant_id": "3dae65399d7c47999282bff6664f6d16", "user_id": "23b5415980f24bbbbfa331c702f6f7d9", "metadata": {"metering.server_group": "d4713e41-6620-49a4-8665-1b2fbe664d9c"}, "hostId": "298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a", "image": {"id": "b43dc593-d176-449d-a8d5-95d53b8e1b5e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b43dc593-d176-449d-a8d5-95d53b8e1b5e"}]}, "flavor": {"id": "7ab5ea96-81dd-4496-8a1f-012f7d2c53c5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7ab5ea96-81dd-4496-8a1f-012f7d2c53c5"}]}, "created": "2025-10-02T19:43:03Z", "updated": "2025-10-02T19:43:05Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.128 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a used request id req-ad4e0eaa-6ca8-41ea-b05c-298b648976b3 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.129 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'paused', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.134 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.139 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance fd018206-5b5d-4759-8481-a7dd68c01a2e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.140 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/fd018206-5b5d-4759-8481-a7dd68c01a2e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.604 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1998 Content-Type: application/json Date: Thu, 02 Oct 2025 19:43:15 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-068ea4d3-d2a1-4195-a1d5-cb8f7fffe458 x-openstack-request-id: req-068ea4d3-d2a1-4195-a1d5-cb8f7fffe458 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.605 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "fd018206-5b5d-4759-8481-a7dd68c01a2e", "name": "tempest-AttachInterfacesUnderV243Test-server-1258340398", "status": "ACTIVE", "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "user_id": "c5d286f2c6fa49b2bded7a673c5a9d52", "metadata": {}, "hostId": "9d21ba019e3d3aedca2e0557611802277f43679d35eb30f4cf2311a6", "image": {"id": "c191839f-7364-41ce-80c8-eff8077fc750", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c191839f-7364-41ce-80c8-eff8077fc750"}]}, "flavor": {"id": "7ab5ea96-81dd-4496-8a1f-012f7d2c53c5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7ab5ea96-81dd-4496-8a1f-012f7d2c53c5"}]}, "created": "2025-10-02T19:42:39Z", "updated": "2025-10-02T19:42:50Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1217982617-network": [{"version": 4, "addr": "10.100.0.12", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:87:ef"}, {"version": 4, "addr": "192.168.122.206", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:87:ef"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/fd018206-5b5d-4759-8481-a7dd68c01a2e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/fd018206-5b5d-4759-8481-a7dd68c01a2e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1402289596", "OS-SRV-USG:launched_at": "2025-10-02T19:42:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1058386137"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.605 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/fd018206-5b5d-4759-8481-a7dd68c01a2e used request id req-068ea4d3-d2a1-4195-a1d5-cb8f7fffe458 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.606 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd018206-5b5d-4759-8481-a7dd68c01a2e', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1258340398', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c191839f-7364-41ce-80c8-eff8077fc750'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bfa01cf9d3eb4388bef0e350af472762', 'user_id': 'c5d286f2c6fa49b2bded7a673c5a9d52', 'hostId': '9d21ba019e3d3aedca2e0557611802277f43679d35eb30f4cf2311a6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.607 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.607 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:43:15.607422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.639 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/cpu volume: 34100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.675 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.711 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 52390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.749 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/cpu volume: 24390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.750 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.751 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/memory.usage volume: 41.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.751 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.751 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a: ceilometer.compute.pollsters.NoVolumeException
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.751 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:43:15.751164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.752 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.752 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance fd018206-5b5d-4759-8481-a7dd68c01a2e: ceilometer.compute.pollsters.NoVolumeException
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.752 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.753 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.753 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.753 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:43:15.753405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.757 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 6eada58a-d077-43e5-ab40-dd45abbe38f3 / tapb27e7b6f-4a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.757 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.762 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f0ac40ea-f3c9-4981-ba99-bfbf34bd253a / tap45b53db0-b1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.762 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.766 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.770 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for fd018206-5b5d-4759-8481-a7dd68c01a2e / tap93a8e2fd-ae inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.771 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.771 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.771 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.772 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.772 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.incoming.bytes volume: 4643 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.772 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:43:15.772242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.773 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.773 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.774 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.774 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.774 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.774 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.775 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.775 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.775 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:43:15.774741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.776 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.777 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.777 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:43:15.777089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.777 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.777 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.778 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.778 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.779 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.779 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.779 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.outgoing.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.780 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.780 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.780 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:43:15.779566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.781 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.782 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.782 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:43:15.782054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.782 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1950508224>, <NovaLikeServer: te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1258340398>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1950508224>, <NovaLikeServer: te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1258340398>]
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.783 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.783 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:43:15.783378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.826 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.read.bytes volume: 29604352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.826 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.864 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.865 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.933 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.934 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.934 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 nova_compute[194781]: 2025-10-02 19:43:15.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.971 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.971 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.972 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.973 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.973 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:43:15.972565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.outgoing.bytes volume: 3456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.974 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:43:15.974301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:43:15.975903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.997 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:15 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:15.997 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.018 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.018 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.055 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.056 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.056 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.083 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.083 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.085 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.085 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.read.latency volume: 1101621293 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.085 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.read.latency volume: 101375269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.086 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.086 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.086 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.087 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.087 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.087 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.read.latency volume: 881831146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.088 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.read.latency volume: 1042048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:43:16.085370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.091 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:43:16.090776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.091 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1950508224>, <NovaLikeServer: te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1258340398>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1950508224>, <NovaLikeServer: te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1258340398>]
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.092 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.092 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.read.requests volume: 1066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.093 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.093 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:43:16.092325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.093 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.094 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.094 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.095 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.095 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.095 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.097 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.097 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.098 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.098 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.099 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.099 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.100 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.101 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.102 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.102 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.102 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.103 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.103 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:43:16.097911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.104 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:43:16.101372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.104 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.105 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.106 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.106 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.write.bytes volume: 72986624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.107 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.108 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.108 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:43:16.106659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.108 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.108 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.109 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.109 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.109 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.110 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.111 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.write.latency volume: 4121724995 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.111 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.111 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.112 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.112 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.113 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.113 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.113 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.114 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 nova_compute[194781]: 2025-10-02 19:43:16.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.114 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.115 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.115 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.115 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.116 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.116 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:43:16.111064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.116 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.116 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.117 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.117 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.117 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.119 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.120 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.write.requests volume: 326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.120 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.120 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.120 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.121 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.121 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.121 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:43:16.115269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.121 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:43:16.119852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.122 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.123 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.124 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.124 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:43:16.123546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.125 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:43:16.125558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.127 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.127 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:43:16.127316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.129 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.129 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.129 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.130 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.130 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.131 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.131 14 DEBUG ceilometer.compute.pollsters [-] 6eada58a-d077-43e5-ab40-dd45abbe38f3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.131 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.131 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.132 14 DEBUG ceilometer.compute.pollsters [-] fd018206-5b5d-4759-8481-a7dd68c01a2e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:43:16.128782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:43:16.131029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:43:16.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:43:18 compute-0 nova_compute[194781]: 2025-10-02 19:43:18.214 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:18 compute-0 nova_compute[194781]: 2025-10-02 19:43:18.215 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:18 compute-0 nova_compute[194781]: 2025-10-02 19:43:18.216 2 INFO nova.compute.manager [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Rebooting instance
Oct 02 19:43:18 compute-0 nova_compute[194781]: 2025-10-02 19:43:18.241 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:43:18 compute-0 nova_compute[194781]: 2025-10-02 19:43:18.241 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquired lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:43:18 compute-0 nova_compute[194781]: 2025-10-02 19:43:18.242 2 DEBUG nova.network.neutron [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:43:18 compute-0 podman[259640]: 2025-10-02 19:43:18.726528764 +0000 UTC m=+0.097715977 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:43:18 compute-0 podman[259641]: 2025-10-02 19:43:18.74363041 +0000 UTC m=+0.107559909 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:43:19 compute-0 nova_compute[194781]: 2025-10-02 19:43:19.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.160 2 DEBUG nova.network.neutron [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.184 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Releasing lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.187 2 DEBUG nova.compute.manager [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:20 compute-0 kernel: tapb27e7b6f-4a (unregistering): left promiscuous mode
Oct 02 19:43:20 compute-0 NetworkManager[52324]: <info>  [1759434200.3234] device (tapb27e7b6f-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:43:20 compute-0 ovn_controller[97052]: 2025-10-02T19:43:20Z|00117|binding|INFO|Releasing lport b27e7b6f-4ab7-48d9-a674-eb640289b746 from this chassis (sb_readonly=0)
Oct 02 19:43:20 compute-0 ovn_controller[97052]: 2025-10-02T19:43:20Z|00118|binding|INFO|Setting lport b27e7b6f-4ab7-48d9-a674-eb640289b746 down in Southbound
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 ovn_controller[97052]: 2025-10-02T19:43:20Z|00119|binding|INFO|Removing iface tapb27e7b6f-4a ovn-installed in OVS
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.355 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:84:0f 10.100.0.3'], port_security=['fa:16:3e:15:84:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6eada58a-d077-43e5-ab40-dd45abbe38f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d458e53358c4398b6ba6051d5c82805', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d169388-279d-4835-af73-74628348527d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61d3b384-7807-48c7-ac4b-e6e147bd5ac4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=b27e7b6f-4ab7-48d9-a674-eb640289b746) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.358 105943 INFO neutron.agent.ovn.metadata.agent [-] Port b27e7b6f-4ab7-48d9-a674-eb640289b746 in datapath a4e44b64-c472-49fb-ac29-fcbb65fb1bdc unbound from our chassis
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.363 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.364 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6481a685-0fb7-4bfb-a87a-aa6f4019d327]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.366 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc namespace which is not needed anymore
Oct 02 19:43:20 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 02 19:43:20 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000006.scope: Consumed 41.755s CPU time.
Oct 02 19:43:20 compute-0 systemd-machined[154795]: Machine qemu-7-instance-00000006 terminated.
Oct 02 19:43:20 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [NOTICE]   (258303) : haproxy version is 2.8.14-c23fe91
Oct 02 19:43:20 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [NOTICE]   (258303) : path to executable is /usr/sbin/haproxy
Oct 02 19:43:20 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [WARNING]  (258303) : Exiting Master process...
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.561 2 INFO nova.virt.libvirt.driver [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance destroyed successfully.
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.564 2 DEBUG nova.objects.instance [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'resources' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:20 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [ALERT]    (258303) : Current worker (258305) exited with code 143 (Terminated)
Oct 02 19:43:20 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[258282]: [WARNING]  (258303) : All workers exited. Exiting... (0)
Oct 02 19:43:20 compute-0 systemd[1]: libpod-c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294.scope: Deactivated successfully.
Oct 02 19:43:20 compute-0 podman[259704]: 2025-10-02 19:43:20.579322647 +0000 UTC m=+0.090312389 container died c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.584 2 DEBUG nova.virt.libvirt.vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950508224',display_name='tempest-ServerActionsTestJSON-server-1950508224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950508224',id=6,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQ3/bi48ARS3VXn9iWcKo/JXrKXcAcgt+LOQWkb1k3Pe3wzNtwmWDod3uxRQb5Dp+at+GfgNvvsZcS9q05pPmKjxF66rj7w8mLvCmgF8foOmp3mBcRf5ivcSaS/PCliQ==',key_name='tempest-keypair-1857372306',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d458e53358c4398b6ba6051d5c82805',ramdisk_id='',reservation_id='r-80w0dyeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897514974',owner_user_name='tempest-ServerActionsTestJSON-897514974-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:43:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1de0891a14a8410da559e3197c8ff98b',uuid=6eada58a-d077-43e5-ab40-dd45abbe38f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.585 2 DEBUG nova.network.os_vif_util [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converting VIF {"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.586 2 DEBUG nova.network.os_vif_util [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.589 2 DEBUG os_vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb27e7b6f-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.603 2 INFO os_vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a')
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.611 2 DEBUG nova.virt.libvirt.driver [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Start _get_guest_xml network_info=[{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.617 2 WARNING nova.virt.libvirt.driver [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.624 2 DEBUG nova.virt.libvirt.host [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.625 2 DEBUG nova.virt.libvirt.host [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.630 2 DEBUG nova.virt.libvirt.host [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294-userdata-shm.mount: Deactivated successfully.
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.634 2 DEBUG nova.virt.libvirt.host [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.635 2 DEBUG nova.virt.libvirt.driver [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.635 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.636 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.636 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.637 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.637 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.638 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.638 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e66001d246673b0665425e2087498d4517c4034ffe6806f92b679c8f5f88a61a-merged.mount: Deactivated successfully.
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.641 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.641 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.641 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.642 2 DEBUG nova.virt.hardware [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.642 2 DEBUG nova.objects.instance [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:20 compute-0 podman[259704]: 2025-10-02 19:43:20.655140419 +0000 UTC m=+0.166130121 container cleanup c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:43:20 compute-0 systemd[1]: libpod-conmon-c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294.scope: Deactivated successfully.
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.675 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:20 compute-0 podman[259750]: 2025-10-02 19:43:20.74035334 +0000 UTC m=+0.052915061 container remove c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.746 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.747 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.748 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.747 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[4ecb4338-3b94-46a0-abf9-2693b2ea4749]: (4, ('Thu Oct  2 07:43:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc (c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294)\nc6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294\nThu Oct  2 07:43:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc (c6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294)\nc6312fda8f42166a6ae354b7a658446ae29ef93bb84a6bedc4d7d22b8afe7294\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.748 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.748 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bc5d91-ef8f-4318-b6b1-972655fc1e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.749 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4e44b64-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.749 2 DEBUG nova.virt.libvirt.vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950508224',display_name='tempest-ServerActionsTestJSON-server-1950508224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950508224',id=6,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQ3/bi48ARS3VXn9iWcKo/JXrKXcAcgt+LOQWkb1k3Pe3wzNtwmWDod3uxRQb5Dp+at+GfgNvvsZcS9q05pPmKjxF66rj7w8mLvCmgF8foOmp3mBcRf5ivcSaS/PCliQ==',key_name='tempest-keypair-1857372306',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d458e53358c4398b6ba6051d5c82805',ramdisk_id='',reservation_id='r-80w0dyeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897514974',owner_user_name='tempest-ServerActionsTestJSON-897514974-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:43:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1de0891a14a8410da559e3197c8ff98b',uuid=6eada58a-d077-43e5-ab40-dd45abbe38f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.749 2 DEBUG nova.network.os_vif_util [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converting VIF {"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.750 2 DEBUG nova.network.os_vif_util [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:43:20 compute-0 kernel: tapa4e44b64-c0: left promiscuous mode
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.752 2 DEBUG nova.objects.instance [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.768 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dfce8e-51bd-43b3-a7c9-c27e8cdb752f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.780 2 DEBUG nova.virt.libvirt.driver [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <uuid>6eada58a-d077-43e5-ab40-dd45abbe38f3</uuid>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <name>instance-00000006</name>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:name>tempest-ServerActionsTestJSON-server-1950508224</nova:name>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:43:20</nova:creationTime>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:user uuid="1de0891a14a8410da559e3197c8ff98b">tempest-ServerActionsTestJSON-897514974-project-member</nova:user>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:project uuid="5d458e53358c4398b6ba6051d5c82805">tempest-ServerActionsTestJSON-897514974</nova:project>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         <nova:port uuid="b27e7b6f-4ab7-48d9-a674-eb640289b746">
Oct 02 19:43:20 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <system>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <entry name="serial">6eada58a-d077-43e5-ab40-dd45abbe38f3</entry>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <entry name="uuid">6eada58a-d077-43e5-ab40-dd45abbe38f3</entry>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </system>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <os>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </os>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <features>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </features>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk.config"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:15:84:0f"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <target dev="tapb27e7b6f-4a"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/console.log" append="off"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <video>
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </video>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <input type="keyboard" bus="usb"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:43:20 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:43:20 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:43:20 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:43:20 compute-0 nova_compute[194781]: </domain>
Oct 02 19:43:20 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.782 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.791 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[2221a2ee-bfed-43b4-ae01-1b6b2047b916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.794 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6b86c2-2854-49a2-a714-e9b756ff9e60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.811 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[11ab03ff-4309-48d5-9455-4992b2ee983b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527932, 'reachable_time': 41151, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259768, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 systemd[1]: run-netns-ovnmeta\x2da4e44b64\x2dc472\x2d49fb\x2dac29\x2dfcbb65fb1bdc.mount: Deactivated successfully.
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.816 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:43:20 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:20.816 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd83938-6395-46b4-8ade-d025f0f5f34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.852 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.854 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.915 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.917 2 DEBUG nova.objects.instance [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.933 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.992 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.994 2 DEBUG nova.virt.disk.api [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Checking if we can resize image /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:43:20 compute-0 nova_compute[194781]: 2025-10-02 19:43:20.994 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.056 2 DEBUG oslo_concurrency.processutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.058 2 DEBUG nova.virt.disk.api [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Cannot resize image /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.059 2 DEBUG nova.objects.instance [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'migration_context' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.086 2 DEBUG nova.virt.libvirt.vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950508224',display_name='tempest-ServerActionsTestJSON-server-1950508224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950508224',id=6,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQ3/bi48ARS3VXn9iWcKo/JXrKXcAcgt+LOQWkb1k3Pe3wzNtwmWDod3uxRQb5Dp+at+GfgNvvsZcS9q05pPmKjxF66rj7w8mLvCmgF8foOmp3mBcRf5ivcSaS/PCliQ==',key_name='tempest-keypair-1857372306',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5d458e53358c4398b6ba6051d5c82805',ramdisk_id='',reservation_id='r-80w0dyeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897514974',owner_user_name='tempest-ServerActionsTestJSON-897514974-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:43:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1de0891a14a8410da559e3197c8ff98b',uuid=6eada58a-d077-43e5-ab40-dd45abbe38f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.087 2 DEBUG nova.network.os_vif_util [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converting VIF {"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.089 2 DEBUG nova.network.os_vif_util [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.090 2 DEBUG os_vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.091 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.091 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb27e7b6f-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb27e7b6f-4a, col_values=(('external_ids', {'iface-id': 'b27e7b6f-4ab7-48d9-a674-eb640289b746', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:84:0f', 'vm-uuid': '6eada58a-d077-43e5-ab40-dd45abbe38f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.0985] manager: (tapb27e7b6f-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.104 2 INFO os_vif [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a')
Oct 02 19:43:21 compute-0 kernel: tapb27e7b6f-4a: entered promiscuous mode
Oct 02 19:43:21 compute-0 systemd-udevd[259686]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.2089] manager: (tapb27e7b6f-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 ovn_controller[97052]: 2025-10-02T19:43:21Z|00120|binding|INFO|Claiming lport b27e7b6f-4ab7-48d9-a674-eb640289b746 for this chassis.
Oct 02 19:43:21 compute-0 ovn_controller[97052]: 2025-10-02T19:43:21Z|00121|binding|INFO|b27e7b6f-4ab7-48d9-a674-eb640289b746: Claiming fa:16:3e:15:84:0f 10.100.0.3
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.218 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:84:0f 10.100.0.3'], port_security=['fa:16:3e:15:84:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6eada58a-d077-43e5-ab40-dd45abbe38f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d458e53358c4398b6ba6051d5c82805', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d169388-279d-4835-af73-74628348527d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61d3b384-7807-48c7-ac4b-e6e147bd5ac4, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=b27e7b6f-4ab7-48d9-a674-eb640289b746) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.220 105943 INFO neutron.agent.ovn.metadata.agent [-] Port b27e7b6f-4ab7-48d9-a674-eb640289b746 in datapath a4e44b64-c472-49fb-ac29-fcbb65fb1bdc bound to our chassis
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.222 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a4e44b64-c472-49fb-ac29-fcbb65fb1bdc
Oct 02 19:43:21 compute-0 ovn_controller[97052]: 2025-10-02T19:43:21Z|00122|binding|INFO|Setting lport b27e7b6f-4ab7-48d9-a674-eb640289b746 ovn-installed in OVS
Oct 02 19:43:21 compute-0 ovn_controller[97052]: 2025-10-02T19:43:21Z|00123|binding|INFO|Setting lport b27e7b6f-4ab7-48d9-a674-eb640289b746 up in Southbound
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.238 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a2835471-04e7-429d-98c1-75181db71cf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.240 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa4e44b64-c1 in ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.2402] device (tapb27e7b6f-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.2414] device (tapb27e7b6f-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.242 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa4e44b64-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.242 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d00524-004a-4194-82c3-ae791774d699]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.243 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[fdce019c-37ad-4881-baa3-0c9c827c3d01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 systemd-machined[154795]: New machine qemu-12-instance-00000006.
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.260 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5872d9-c7c1-49bc-bdf0-60d45fd7f708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000006.
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.276 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[af1382b0-4b21-410a-9ba4-af27936d5f83]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.309 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[571385df-2ba8-4ce4-a3ee-368685ee3034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.316 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[7123309a-0229-48b0-87ee-d98583cfdc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.3177] manager: (tapa4e44b64-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.358 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[414389ed-5dd7-441e-8667-8cd1d4ba5089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.361 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[c321d080-b8a2-4453-aa28-3fa031699d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.3878] device (tapa4e44b64-c0): carrier: link connected
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.395 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[35b492cd-622d-4761-9d3d-f1c7994fb80a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.415 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[f851cccd-24a6-4dc7-b967-6c6e30d26e00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa4e44b64-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c2:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536201, 'reachable_time': 29634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259826, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.434 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a04f54-d889-4aac-91ab-98184d4ce144]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:c2db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 536201, 'tstamp': 536201}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259827, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.452 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[b86bf8d6-9685-4de6-bb46-68025e9e805b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa4e44b64-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:c2:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536201, 'reachable_time': 29634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259828, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.485 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[682e494d-202a-4761-ba7c-8ac3675dc5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.501 2 DEBUG nova.compute.manager [req-11bdfe01-9c76-4711-adc1-6ddda006b9f0 req-4c1100f1-bcf8-4b0d-bc42-749f6d03ba2e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.502 2 DEBUG oslo_concurrency.lockutils [req-11bdfe01-9c76-4711-adc1-6ddda006b9f0 req-4c1100f1-bcf8-4b0d-bc42-749f6d03ba2e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.502 2 DEBUG oslo_concurrency.lockutils [req-11bdfe01-9c76-4711-adc1-6ddda006b9f0 req-4c1100f1-bcf8-4b0d-bc42-749f6d03ba2e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.502 2 DEBUG oslo_concurrency.lockutils [req-11bdfe01-9c76-4711-adc1-6ddda006b9f0 req-4c1100f1-bcf8-4b0d-bc42-749f6d03ba2e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.502 2 DEBUG nova.compute.manager [req-11bdfe01-9c76-4711-adc1-6ddda006b9f0 req-4c1100f1-bcf8-4b0d-bc42-749f6d03ba2e fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Processing event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.503 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.507 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434201.5072598, f0ac40ea-f3c9-4981-ba99-bfbf34bd253a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.507 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] VM Resumed (Lifecycle Event)
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.525 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.530 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.533 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.538 2 INFO nova.virt.libvirt.driver [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Instance spawned successfully.
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.538 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.561 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.573 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.574 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.574 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.575 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.575 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.576 2 DEBUG nova.virt.libvirt.driver [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.577 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[949f6e1a-3016-4372-b742-c7af6d2a6a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.578 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4e44b64-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.578 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.578 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4e44b64-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 NetworkManager[52324]: <info>  [1759434201.5817] manager: (tapa4e44b64-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 02 19:43:21 compute-0 kernel: tapa4e44b64-c0: entered promiscuous mode
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.585 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa4e44b64-c0, col_values=(('external_ids', {'iface-id': 'bd80466a-6146-45a7-be35-ec332e1ee93c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 ovn_controller[97052]: 2025-10-02T19:43:21Z|00124|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.592 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.602 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9871bf08-76db-449d-a957-39a16f591ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.602 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.pid.haproxy
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID a4e44b64-c472-49fb-ac29-fcbb65fb1bdc
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:43:21 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:21.603 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'env', 'PROCESS_TAG=haproxy-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a4e44b64-c472-49fb-ac29-fcbb65fb1bdc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.633 2 INFO nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Took 16.12 seconds to spawn the instance on the hypervisor.
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.633 2 DEBUG nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.713 2 INFO nova.compute.manager [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Took 16.71 seconds to build instance.
Oct 02 19:43:21 compute-0 podman[259835]: 2025-10-02 19:43:21.729564987 +0000 UTC m=+0.099458353 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:43:21 compute-0 nova_compute[194781]: 2025-10-02 19:43:21.729 2 DEBUG oslo_concurrency.lockutils [None req-13b49864-5326-483b-9522-d630521c3708 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:21 compute-0 podman[259837]: 2025-10-02 19:43:21.784320367 +0000 UTC m=+0.154171692 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:43:22 compute-0 podman[259899]: 2025-10-02 19:43:22.029270149 +0000 UTC m=+0.063115184 container create b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:43:22 compute-0 podman[259899]: 2025-10-02 19:43:21.989805916 +0000 UTC m=+0.023650951 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:43:22 compute-0 systemd[1]: Started libpod-conmon-b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08.scope.
Oct 02 19:43:22 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea75340e71a1354d0e5c28e17cad6e03113255803dbcf679ca2f2b22332b1fcf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:43:22 compute-0 podman[259899]: 2025-10-02 19:43:22.158807703 +0000 UTC m=+0.192652728 container init b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 19:43:22 compute-0 podman[259899]: 2025-10-02 19:43:22.170800932 +0000 UTC m=+0.204645957 container start b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:43:22 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [NOTICE]   (259922) : New worker (259925) forked
Oct 02 19:43:22 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [NOTICE]   (259922) : Loading success.
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.827 2 DEBUG nova.virt.libvirt.host [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Removed pending event for 6eada58a-d077-43e5-ab40-dd45abbe38f3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.827 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434202.8268263, 6eada58a-d077-43e5-ab40-dd45abbe38f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.828 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] VM Resumed (Lifecycle Event)
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.829 2 DEBUG nova.compute.manager [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.834 2 INFO nova.virt.libvirt.driver [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance rebooted successfully.
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.834 2 DEBUG nova.compute.manager [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.852 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.861 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.900 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.900 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434202.8280182, 6eada58a-d077-43e5-ab40-dd45abbe38f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.900 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] VM Started (Lifecycle Event)
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.939 2 DEBUG oslo_concurrency.lockutils [None req-456be78e-79e1-47d5-b4d2-10f9f2ad5ca8 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.942 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:43:22 compute-0 nova_compute[194781]: 2025-10-02 19:43:22.948 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.173 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.173 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.173 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.174 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.174 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] No waiting events found dispatching network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.174 2 WARNING nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received unexpected event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c for instance with vm_state active and task_state None.
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.174 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-unplugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.175 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.175 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.175 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.175 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-unplugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.175 2 WARNING nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received unexpected event network-vif-unplugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with vm_state active and task_state None.
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.176 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.176 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.176 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.176 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.177 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.177 2 WARNING nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received unexpected event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with vm_state active and task_state None.
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.177 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.177 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.177 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.178 2 DEBUG oslo_concurrency.lockutils [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.178 2 DEBUG nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.178 2 WARNING nova.compute.manager [req-4847782f-36a6-4c5b-8c82-5736a4f19758 req-c37c807b-e633-48e4-bc09-695697e91822 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received unexpected event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with vm_state active and task_state None.
Oct 02 19:43:24 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:24.939 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:43:24 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:24.940 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:43:24 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:24.940 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:43:24 compute-0 nova_compute[194781]: 2025-10-02 19:43:24.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:25 compute-0 ovn_controller[97052]: 2025-10-02T19:43:25Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:87:ef 10.100.0.12
Oct 02 19:43:25 compute-0 ovn_controller[97052]: 2025-10-02T19:43:25Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:87:ef 10.100.0.12
Oct 02 19:43:25 compute-0 nova_compute[194781]: 2025-10-02 19:43:25.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.378 2 DEBUG nova.compute.manager [req-ad1939a9-6ed4-4e19-a67b-1d7c77835a6d req-6ef93aed-12a7-4820-af6d-f21977cdceae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.379 2 DEBUG oslo_concurrency.lockutils [req-ad1939a9-6ed4-4e19-a67b-1d7c77835a6d req-6ef93aed-12a7-4820-af6d-f21977cdceae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.379 2 DEBUG oslo_concurrency.lockutils [req-ad1939a9-6ed4-4e19-a67b-1d7c77835a6d req-6ef93aed-12a7-4820-af6d-f21977cdceae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.379 2 DEBUG oslo_concurrency.lockutils [req-ad1939a9-6ed4-4e19-a67b-1d7c77835a6d req-6ef93aed-12a7-4820-af6d-f21977cdceae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.380 2 DEBUG nova.compute.manager [req-ad1939a9-6ed4-4e19-a67b-1d7c77835a6d req-6ef93aed-12a7-4820-af6d-f21977cdceae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.380 2 WARNING nova.compute.manager [req-ad1939a9-6ed4-4e19-a67b-1d7c77835a6d req-6ef93aed-12a7-4820-af6d-f21977cdceae fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received unexpected event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with vm_state active and task_state None.
Oct 02 19:43:26 compute-0 nova_compute[194781]: 2025-10-02 19:43:26.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:29 compute-0 nova_compute[194781]: 2025-10-02 19:43:29.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:29 compute-0 podman[209015]: time="2025-10-02T19:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:43:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35680 "" "Go-http-client/1.1"
Oct 02 19:43:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6608 "" "Go-http-client/1.1"
Oct 02 19:43:29 compute-0 podman[259943]: 2025-10-02 19:43:29.789543889 +0000 UTC m=+0.136689326 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:43:30 compute-0 ovn_controller[97052]: 2025-10-02T19:43:30Z|00125|memory|INFO|peak resident set size grew 52% in last 2535.4 seconds, from 16128 kB to 24548 kB
Oct 02 19:43:30 compute-0 ovn_controller[97052]: 2025-10-02T19:43:30Z|00126|memory|INFO|idl-cells-OVN_Southbound:11235 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:401 lflow-cache-entries-cache-matches:298 lflow-cache-size-KB:1574 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:732 ofctrl_installed_flow_usage-KB:535 ofctrl_sb_flow_ref_usage-KB:274
Oct 02 19:43:30 compute-0 nova_compute[194781]: 2025-10-02 19:43:30.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:31 compute-0 nova_compute[194781]: 2025-10-02 19:43:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:31 compute-0 openstack_network_exporter[211160]: ERROR   19:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:31 compute-0 openstack_network_exporter[211160]: ERROR   19:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:43:31 compute-0 openstack_network_exporter[211160]: ERROR   19:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:43:31 compute-0 openstack_network_exporter[211160]: ERROR   19:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:43:31 compute-0 openstack_network_exporter[211160]: ERROR   19:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:43:33 compute-0 nova_compute[194781]: 2025-10-02 19:43:33.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:34 compute-0 nova_compute[194781]: 2025-10-02 19:43:34.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:34 compute-0 nova_compute[194781]: 2025-10-02 19:43:34.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:43:35 compute-0 ovn_controller[97052]: 2025-10-02T19:43:35Z|00127|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:43:35 compute-0 ovn_controller[97052]: 2025-10-02T19:43:35Z|00128|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:43:35 compute-0 ovn_controller[97052]: 2025-10-02T19:43:35Z|00129|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:43:35 compute-0 ovn_controller[97052]: 2025-10-02T19:43:35Z|00130|binding|INFO|Releasing lport 5a048b67-2936-4fb1-8322-b03194cd7ecb from this chassis (sb_readonly=0)
Oct 02 19:43:35 compute-0 nova_compute[194781]: 2025-10-02 19:43:35.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:35 compute-0 nova_compute[194781]: 2025-10-02 19:43:35.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:36 compute-0 nova_compute[194781]: 2025-10-02 19:43:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:36 compute-0 nova_compute[194781]: 2025-10-02 19:43:36.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.069 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.069 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.156 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.221 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.222 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.282 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.290 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.351 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.353 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.467 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.480 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.571 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.573 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.641 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.644 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.714 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.717 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.847 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.859 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.957 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:37 compute-0 nova_compute[194781]: 2025-10-02 19:43:37.960 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.060 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.515 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.517 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=72.35270690917969GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.517 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.518 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.635 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.635 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 6eada58a-d077-43e5-ab40-dd45abbe38f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.635 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance fd018206-5b5d-4759-8481-a7dd68c01a2e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.635 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.636 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.636 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=79GB used_disk=5GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:43:38 compute-0 podman[259999]: 2025-10-02 19:43:38.715632006 +0000 UTC m=+0.090237117 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:43:38 compute-0 podman[260000]: 2025-10-02 19:43:38.731336234 +0000 UTC m=+0.101287862 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.787 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.808 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.829 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:43:38 compute-0 nova_compute[194781]: 2025-10-02 19:43:38.829 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:39 compute-0 nova_compute[194781]: 2025-10-02 19:43:39.830 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:40 compute-0 nova_compute[194781]: 2025-10-02 19:43:40.031 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:40 compute-0 ovn_controller[97052]: 2025-10-02T19:43:40Z|00131|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:43:40 compute-0 ovn_controller[97052]: 2025-10-02T19:43:40Z|00132|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:43:40 compute-0 ovn_controller[97052]: 2025-10-02T19:43:40Z|00133|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:43:40 compute-0 ovn_controller[97052]: 2025-10-02T19:43:40Z|00134|binding|INFO|Releasing lport 5a048b67-2936-4fb1-8322-b03194cd7ecb from this chassis (sb_readonly=0)
Oct 02 19:43:40 compute-0 nova_compute[194781]: 2025-10-02 19:43:40.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:40 compute-0 nova_compute[194781]: 2025-10-02 19:43:40.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:40 compute-0 nova_compute[194781]: 2025-10-02 19:43:40.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:41 compute-0 nova_compute[194781]: 2025-10-02 19:43:41.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:41 compute-0 podman[260034]: 2025-10-02 19:43:41.745096865 +0000 UTC m=+0.107687303 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Oct 02 19:43:41 compute-0 podman[260035]: 2025-10-02 19:43:41.758830731 +0000 UTC m=+0.121419979 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:43:41 compute-0 podman[260036]: 2025-10-02 19:43:41.781266539 +0000 UTC m=+0.129652888 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 19:43:45 compute-0 nova_compute[194781]: 2025-10-02 19:43:45.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:45 compute-0 nova_compute[194781]: 2025-10-02 19:43:45.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:46 compute-0 nova_compute[194781]: 2025-10-02 19:43:46.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:43:46 compute-0 nova_compute[194781]: 2025-10-02 19:43:46.037 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:43:46 compute-0 nova_compute[194781]: 2025-10-02 19:43:46.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:46 compute-0 nova_compute[194781]: 2025-10-02 19:43:46.397 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:43:46 compute-0 nova_compute[194781]: 2025-10-02 19:43:46.397 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:43:46 compute-0 nova_compute[194781]: 2025-10-02 19:43:46.397 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:43:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:47.489 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:43:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:47.490 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:43:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:43:47.491 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:43:48 compute-0 nova_compute[194781]: 2025-10-02 19:43:48.241 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [{"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:43:48 compute-0 nova_compute[194781]: 2025-10-02 19:43:48.267 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-6eada58a-d077-43e5-ab40-dd45abbe38f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:43:48 compute-0 nova_compute[194781]: 2025-10-02 19:43:48.268 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:43:49 compute-0 podman[260090]: 2025-10-02 19:43:49.751601749 +0000 UTC m=+0.128576870 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:43:49 compute-0 podman[260091]: 2025-10-02 19:43:49.764953195 +0000 UTC m=+0.125418015 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 19:43:49 compute-0 nova_compute[194781]: 2025-10-02 19:43:49.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:50 compute-0 nova_compute[194781]: 2025-10-02 19:43:50.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:51 compute-0 nova_compute[194781]: 2025-10-02 19:43:51.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:52 compute-0 podman[260145]: 2025-10-02 19:43:52.733857369 +0000 UTC m=+0.099904245 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:43:52 compute-0 podman[260146]: 2025-10-02 19:43:52.777610486 +0000 UTC m=+0.133669135 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:43:53 compute-0 nova_compute[194781]: 2025-10-02 19:43:53.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:54 compute-0 ovn_controller[97052]: 2025-10-02T19:43:54Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:c6:bd 10.100.2.28
Oct 02 19:43:54 compute-0 ovn_controller[97052]: 2025-10-02T19:43:54Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:c6:bd 10.100.2.28
Oct 02 19:43:54 compute-0 nova_compute[194781]: 2025-10-02 19:43:54.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:55 compute-0 nova_compute[194781]: 2025-10-02 19:43:55.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:56 compute-0 nova_compute[194781]: 2025-10-02 19:43:56.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:43:56 compute-0 ovn_controller[97052]: 2025-10-02T19:43:56Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:84:0f 10.100.0.3
Oct 02 19:43:59 compute-0 nova_compute[194781]: 2025-10-02 19:43:59.617 2 DEBUG nova.objects.instance [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lazy-loading 'flavor' on Instance uuid fd018206-5b5d-4759-8481-a7dd68c01a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:43:59 compute-0 nova_compute[194781]: 2025-10-02 19:43:59.661 2 DEBUG oslo_concurrency.lockutils [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:43:59 compute-0 nova_compute[194781]: 2025-10-02 19:43:59.661 2 DEBUG oslo_concurrency.lockutils [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:43:59 compute-0 podman[209015]: time="2025-10-02T19:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:43:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35680 "" "Go-http-client/1.1"
Oct 02 19:43:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6616 "" "Go-http-client/1.1"
Oct 02 19:44:00 compute-0 ovn_controller[97052]: 2025-10-02T19:44:00Z|00135|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:44:00 compute-0 ovn_controller[97052]: 2025-10-02T19:44:00Z|00136|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:44:00 compute-0 ovn_controller[97052]: 2025-10-02T19:44:00Z|00137|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:44:00 compute-0 ovn_controller[97052]: 2025-10-02T19:44:00Z|00138|binding|INFO|Releasing lport 5a048b67-2936-4fb1-8322-b03194cd7ecb from this chassis (sb_readonly=0)
Oct 02 19:44:00 compute-0 nova_compute[194781]: 2025-10-02 19:44:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:00 compute-0 nova_compute[194781]: 2025-10-02 19:44:00.662 2 DEBUG nova.network.neutron [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:44:00 compute-0 podman[260193]: 2025-10-02 19:44:00.719142509 +0000 UTC m=+0.093293339 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:44:00 compute-0 nova_compute[194781]: 2025-10-02 19:44:00.803 2 DEBUG nova.compute.manager [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:00 compute-0 nova_compute[194781]: 2025-10-02 19:44:00.804 2 DEBUG nova.compute.manager [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing instance network info cache due to event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:44:00 compute-0 nova_compute[194781]: 2025-10-02 19:44:00.804 2 DEBUG oslo_concurrency.lockutils [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:00 compute-0 nova_compute[194781]: 2025-10-02 19:44:00.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:01 compute-0 nova_compute[194781]: 2025-10-02 19:44:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:01 compute-0 nova_compute[194781]: 2025-10-02 19:44:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: ERROR   19:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: ERROR   19:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: ERROR   19:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: ERROR   19:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: ERROR   19:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:44:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:44:02 compute-0 nova_compute[194781]: 2025-10-02 19:44:02.139 2 DEBUG nova.network.neutron [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:02 compute-0 nova_compute[194781]: 2025-10-02 19:44:02.164 2 DEBUG oslo_concurrency.lockutils [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:02 compute-0 nova_compute[194781]: 2025-10-02 19:44:02.165 2 DEBUG nova.compute.manager [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 19:44:02 compute-0 nova_compute[194781]: 2025-10-02 19:44:02.166 2 DEBUG nova.compute.manager [None req-5e6be2e9-cbbe-49fd-8b20-3b05abc3dbb2 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] network_info to inject: |[{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 02 19:44:02 compute-0 nova_compute[194781]: 2025-10-02 19:44:02.169 2 DEBUG oslo_concurrency.lockutils [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:02 compute-0 nova_compute[194781]: 2025-10-02 19:44:02.170 2 DEBUG nova.network.neutron [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:44:04 compute-0 nova_compute[194781]: 2025-10-02 19:44:04.955 2 DEBUG nova.objects.instance [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lazy-loading 'flavor' on Instance uuid fd018206-5b5d-4759-8481-a7dd68c01a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:05 compute-0 nova_compute[194781]: 2025-10-02 19:44:05.004 2 DEBUG oslo_concurrency.lockutils [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:05 compute-0 nova_compute[194781]: 2025-10-02 19:44:05.466 2 DEBUG nova.network.neutron [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updated VIF entry in instance network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:44:05 compute-0 nova_compute[194781]: 2025-10-02 19:44:05.467 2 DEBUG nova.network.neutron [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:05 compute-0 nova_compute[194781]: 2025-10-02 19:44:05.483 2 DEBUG oslo_concurrency.lockutils [req-e85ad321-49f3-475a-b3d4-746df6f85a39 req-80024a49-9c05-4a15-bc01-1f6dfaa87658 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:05 compute-0 nova_compute[194781]: 2025-10-02 19:44:05.484 2 DEBUG oslo_concurrency.lockutils [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:05 compute-0 nova_compute[194781]: 2025-10-02 19:44:05.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:06 compute-0 nova_compute[194781]: 2025-10-02 19:44:06.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:06 compute-0 nova_compute[194781]: 2025-10-02 19:44:06.476 2 DEBUG nova.network.neutron [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:44:06 compute-0 nova_compute[194781]: 2025-10-02 19:44:06.597 2 DEBUG nova.compute.manager [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:06 compute-0 nova_compute[194781]: 2025-10-02 19:44:06.598 2 DEBUG nova.compute.manager [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing instance network info cache due to event network-changed-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:44:06 compute-0 nova_compute[194781]: 2025-10-02 19:44:06.599 2 DEBUG oslo_concurrency.lockutils [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:08 compute-0 nova_compute[194781]: 2025-10-02 19:44:08.488 2 DEBUG nova.network.neutron [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:08 compute-0 nova_compute[194781]: 2025-10-02 19:44:08.513 2 DEBUG oslo_concurrency.lockutils [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:08 compute-0 nova_compute[194781]: 2025-10-02 19:44:08.514 2 DEBUG nova.compute.manager [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 02 19:44:08 compute-0 nova_compute[194781]: 2025-10-02 19:44:08.514 2 DEBUG nova.compute.manager [None req-9ca3b4aa-c758-4cf6-8e37-ab4585fdb8b5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] network_info to inject: |[{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 02 19:44:08 compute-0 nova_compute[194781]: 2025-10-02 19:44:08.517 2 DEBUG oslo_concurrency.lockutils [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:08 compute-0 nova_compute[194781]: 2025-10-02 19:44:08.517 2 DEBUG nova.network.neutron [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Refreshing network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:44:09 compute-0 podman[260218]: 2025-10-02 19:44:09.744281477 +0000 UTC m=+0.109413348 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true)
Oct 02 19:44:09 compute-0 podman[260217]: 2025-10-02 19:44:09.782990959 +0000 UTC m=+0.144316609 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, 
config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:44:09 compute-0 nova_compute[194781]: 2025-10-02 19:44:09.986 2 DEBUG nova.network.neutron [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updated VIF entry in instance network info cache for port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:44:09 compute-0 nova_compute[194781]: 2025-10-02 19:44:09.989 2 DEBUG nova.network.neutron [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [{"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.009 2 DEBUG oslo_concurrency.lockutils [req-d0e12080-39d6-422a-8ed7-54949569e380 req-992a4bf0-a183-4ae0-8334-a93a1e088036 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-fd018206-5b5d-4759-8481-a7dd68c01a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.574 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.576 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.577 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.578 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.579 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.581 2 INFO nova.compute.manager [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Terminating instance
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.583 2 DEBUG nova.compute.manager [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:44:10 compute-0 kernel: tap93a8e2fd-ae (unregistering): left promiscuous mode
Oct 02 19:44:10 compute-0 ovn_controller[97052]: 2025-10-02T19:44:10Z|00139|binding|INFO|Releasing lport 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 from this chassis (sb_readonly=0)
Oct 02 19:44:10 compute-0 ovn_controller[97052]: 2025-10-02T19:44:10Z|00140|binding|INFO|Setting lport 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 down in Southbound
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 NetworkManager[52324]: <info>  [1759434250.6648] device (tap93a8e2fd-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:44:10 compute-0 ovn_controller[97052]: 2025-10-02T19:44:10Z|00141|binding|INFO|Removing iface tap93a8e2fd-ae ovn-installed in OVS
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:10.698 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:87:ef 10.100.0.12'], port_security=['fa:16:3e:81:87:ef 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fd018206-5b5d-4759-8481-a7dd68c01a2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfa01cf9d3eb4388bef0e350af472762', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9a5e2c76-a0b6-479b-a7e1-ac8a5e2ef609', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac8c83c5-af49-454a-8773-e23c66675f28, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:44:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:10.700 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 in datapath c07a9d85-90af-47c3-a2ed-3103aaadb7da unbound from our chassis
Oct 02 19:44:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:10.702 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c07a9d85-90af-47c3-a2ed-3103aaadb7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:44:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:10.704 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[dc30d4d8-18cd-4b2f-adfb-e1dc767ca45d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:10 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:10.704 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da namespace which is not needed anymore
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 02 19:44:10 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000a.scope: Consumed 41.107s CPU time.
Oct 02 19:44:10 compute-0 systemd-machined[154795]: Machine qemu-9-instance-0000000a terminated.
Oct 02 19:44:10 compute-0 kernel: tap93a8e2fd-ae: entered promiscuous mode
Oct 02 19:44:10 compute-0 NetworkManager[52324]: <info>  [1759434250.8151] manager: (tap93a8e2fd-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 02 19:44:10 compute-0 systemd-udevd[260256]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:44:10 compute-0 kernel: tap93a8e2fd-ae (unregistering): left promiscuous mode
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.885 2 INFO nova.virt.libvirt.driver [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Instance destroyed successfully.
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.889 2 DEBUG nova.objects.instance [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lazy-loading 'resources' on Instance uuid fd018206-5b5d-4759-8481-a7dd68c01a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.912 2 DEBUG nova.virt.libvirt.vif [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:42:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1258340398',display_name='tempest-AttachInterfacesUnderV243Test-server-1258340398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1258340398',id=10,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAi8r74NAihki+Wu7/WVf2EpMRRpAad1pvOJ9n7X7dtUA3wA81PPkz4CDNLV0PKBV+vfeT6ZKEwNa2p45q2P6JovkirP8zmol2nXt3bF1GLnxW946byUaEp1P161J+2sXQ==',key_name='tempest-keypair-1402289596',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfa01cf9d3eb4388bef0e350af472762',ramdisk_id='',reservation_id='r-8g54kq9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1074896381',owner_user_name='tempest-AttachInterfacesUnderV243Test-1074896381-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:44:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5d286f2c6fa49b2bded7a673c5a9d52',uuid=fd018206-5b5d-4759-8481-a7dd68c01a2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.913 2 DEBUG nova.network.os_vif_util [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Converting VIF {"id": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "address": "fa:16:3e:81:87:ef", "network": {"id": "c07a9d85-90af-47c3-a2ed-3103aaadb7da", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1217982617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfa01cf9d3eb4388bef0e350af472762", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93a8e2fd-ae", "ovs_interfaceid": "93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.915 2 DEBUG nova.network.os_vif_util [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.917 2 DEBUG os_vif [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93a8e2fd-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.936 2 INFO os_vif [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:87:ef,bridge_name='br-int',has_traffic_filtering=True,id=93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66,network=Network(c07a9d85-90af-47c3-a2ed-3103aaadb7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93a8e2fd-ae')
Oct 02 19:44:10 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [NOTICE]   (259154) : haproxy version is 2.8.14-c23fe91
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.938 2 INFO nova.virt.libvirt.driver [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Deleting instance files /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e_del
Oct 02 19:44:10 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [NOTICE]   (259154) : path to executable is /usr/sbin/haproxy
Oct 02 19:44:10 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [WARNING]  (259154) : Exiting Master process...
Oct 02 19:44:10 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [ALERT]    (259154) : Current worker (259156) exited with code 143 (Terminated)
Oct 02 19:44:10 compute-0 neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da[259150]: [WARNING]  (259154) : All workers exited. Exiting... (0)
Oct 02 19:44:10 compute-0 podman[260280]: 2025-10-02 19:44:10.95352593 +0000 UTC m=+0.085255785 container died 1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:44:10 compute-0 systemd[1]: libpod-1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3.scope: Deactivated successfully.
Oct 02 19:44:10 compute-0 nova_compute[194781]: 2025-10-02 19:44:10.953 2 INFO nova.virt.libvirt.driver [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Deletion of /var/lib/nova/instances/fd018206-5b5d-4759-8481-a7dd68c01a2e_del complete
Oct 02 19:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3-userdata-shm.mount: Deactivated successfully.
Oct 02 19:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec552415914de20eaac25976e537c7d3952ee93f9be482a053eb4906678aa5c7-merged.mount: Deactivated successfully.
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.011 2 INFO nova.compute.manager [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Took 0.43 seconds to destroy the instance on the hypervisor.
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.012 2 DEBUG oslo.service.loopingcall [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.013 2 DEBUG nova.compute.manager [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.014 2 DEBUG nova.network.neutron [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:44:11 compute-0 podman[260280]: 2025-10-02 19:44:11.016711195 +0000 UTC m=+0.148441020 container cleanup 1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 19:44:11 compute-0 systemd[1]: libpod-conmon-1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3.scope: Deactivated successfully.
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.110 2 DEBUG nova.compute.manager [req-513e6432-5a3f-498d-b9ea-94942ae1212a req-ccaaf201-7d83-4477-8195-c64c7b56480f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-vif-unplugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.111 2 DEBUG oslo_concurrency.lockutils [req-513e6432-5a3f-498d-b9ea-94942ae1212a req-ccaaf201-7d83-4477-8195-c64c7b56480f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.112 2 DEBUG oslo_concurrency.lockutils [req-513e6432-5a3f-498d-b9ea-94942ae1212a req-ccaaf201-7d83-4477-8195-c64c7b56480f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.113 2 DEBUG oslo_concurrency.lockutils [req-513e6432-5a3f-498d-b9ea-94942ae1212a req-ccaaf201-7d83-4477-8195-c64c7b56480f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.113 2 DEBUG nova.compute.manager [req-513e6432-5a3f-498d-b9ea-94942ae1212a req-ccaaf201-7d83-4477-8195-c64c7b56480f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] No waiting events found dispatching network-vif-unplugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:44:11 compute-0 podman[260313]: 2025-10-02 19:44:11.113826444 +0000 UTC m=+0.065002554 container remove 1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.114 2 DEBUG nova.compute.manager [req-513e6432-5a3f-498d-b9ea-94942ae1212a req-ccaaf201-7d83-4477-8195-c64c7b56480f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-vif-unplugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.128 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[07d8ad88-f274-4503-aa57-00430a9785d3]: (4, ('Thu Oct  2 07:44:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da (1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3)\n1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3\nThu Oct  2 07:44:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da (1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3)\n1d92da7ae4fe38534e0d4bc7f582bfd88c880bfd9daeb4246489b649a6c34be3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.130 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bb7a21-e8ec-491a-b770-0b61443a955e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.131 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc07a9d85-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:11 compute-0 kernel: tapc07a9d85-90: left promiscuous mode
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:11 compute-0 nova_compute[194781]: 2025-10-02 19:44:11.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.158 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6e75c6a9-61ee-47f2-980d-bbaf9ce6ff9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.186 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[17463117-1b36-4407-ac72-9a507a808403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.188 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[98f251b9-7cf2-48b8-85a7-d2b9761b1ba6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.203 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa84eec-9ff2-4a21-8df1-8f0bc138dbb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532992, 'reachable_time': 19374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260328, 'error': None, 'target': 'ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.207 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c07a9d85-90af-47c3-a2ed-3103aaadb7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:44:11 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:11.207 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fbc3f5-c91a-429f-affa-6411a2f16841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dc07a9d85\x2d90af\x2d47c3\x2da2ed\x2d3103aaadb7da.mount: Deactivated successfully.
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.157 2 DEBUG nova.network.neutron [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.172 2 INFO nova.compute.manager [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Took 1.16 seconds to deallocate network for instance.
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.223 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.224 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.284 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.284 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.309 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.375 2 DEBUG nova.compute.provider_tree [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.382 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.392 2 DEBUG nova.scheduler.client.report [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.410 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.413 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.424 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.424 2 INFO nova.compute.claims [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.443 2 INFO nova.scheduler.client.report [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Deleted allocations for instance fd018206-5b5d-4759-8481-a7dd68c01a2e
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.532 2 DEBUG oslo_concurrency.lockutils [None req-e090816a-8c9c-4140-8539-1a9f2e54cdd5 c5d286f2c6fa49b2bded7a673c5a9d52 bfa01cf9d3eb4388bef0e350af472762 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.644 2 DEBUG nova.compute.provider_tree [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.661 2 DEBUG nova.scheduler.client.report [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.686 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.687 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.740 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.741 2 DEBUG nova.network.neutron [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:44:12 compute-0 podman[260329]: 2025-10-02 19:44:12.752442287 +0000 UTC m=+0.120771161 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:44:12 compute-0 podman[260331]: 2025-10-02 19:44:12.765732481 +0000 UTC m=+0.103479900 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.765 2 INFO nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:44:12 compute-0 podman[260330]: 2025-10-02 19:44:12.774215487 +0000 UTC m=+0.120546595 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', 
'/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0)
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.788 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.878 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.880 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.881 2 INFO nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Creating image(s)
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.881 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "/var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.882 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "/var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.883 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "/var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.895 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.996 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.998 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:12 compute-0 nova_compute[194781]: 2025-10-02 19:44:12.998 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.012 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.032 2 DEBUG nova.policy [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6477d2ef96bd4c318dea2a18da231121', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0363243e85d429c956681904cf9714d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.072 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.074 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.120 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.122 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.124 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.185 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.187 2 DEBUG nova.virt.disk.api [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Checking if we can resize image /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.188 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.267 2 DEBUG nova.compute.manager [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.268 2 DEBUG oslo_concurrency.lockutils [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.269 2 DEBUG oslo_concurrency.lockutils [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.270 2 DEBUG oslo_concurrency.lockutils [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "fd018206-5b5d-4759-8481-a7dd68c01a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.271 2 DEBUG nova.compute.manager [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] No waiting events found dispatching network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.271 2 WARNING nova.compute.manager [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received unexpected event network-vif-plugged-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 for instance with vm_state deleted and task_state None.
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.272 2 DEBUG nova.compute.manager [req-f2c3fc07-f632-47b7-8fc3-dd914259fd29 req-871c267c-a0ca-4826-aaff-edb0ce55c4b4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Received event network-vif-deleted-93a8e2fd-ae8f-42b3-9d35-ccc65e4f1d66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.287 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.288 2 DEBUG nova.virt.disk.api [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Cannot resize image /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.289 2 DEBUG nova.objects.instance [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lazy-loading 'migration_context' on Instance uuid 9f5d3eac-e68c-4a0e-8679-0880a0c51bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.310 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.311 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Ensure instance console log exists: /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.312 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.313 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:13 compute-0 nova_compute[194781]: 2025-10-02 19:44:13.314 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.189 2 DEBUG nova.network.neutron [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Successfully created port: bb1981a1-d5bc-4236-97ff-2763b967de6c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.879 2 DEBUG nova.network.neutron [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Successfully updated port: bb1981a1-d5bc-4236-97ff-2763b967de6c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.893 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.894 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquired lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.895 2 DEBUG nova.network.neutron [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.982 2 DEBUG nova.compute.manager [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-changed-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.983 2 DEBUG nova.compute.manager [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Refreshing instance network info cache due to event network-changed-bb1981a1-d5bc-4236-97ff-2763b967de6c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:44:14 compute-0 nova_compute[194781]: 2025-10-02 19:44:14.984 2 DEBUG oslo_concurrency.lockutils [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:15 compute-0 nova_compute[194781]: 2025-10-02 19:44:15.043 2 DEBUG nova.network.neutron [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:44:15 compute-0 nova_compute[194781]: 2025-10-02 19:44:15.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.300 2 DEBUG nova.network.neutron [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Updating instance_info_cache with network_info: [{"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.322 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Releasing lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.324 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Instance network_info: |[{"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.326 2 DEBUG oslo_concurrency.lockutils [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.329 2 DEBUG nova.network.neutron [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Refreshing network info cache for port bb1981a1-d5bc-4236-97ff-2763b967de6c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.335 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Start _get_guest_xml network_info=[{"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.360 2 WARNING nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.370 2 DEBUG nova.virt.libvirt.host [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.371 2 DEBUG nova.virt.libvirt.host [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.385 2 DEBUG nova.virt.libvirt.host [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.386 2 DEBUG nova.virt.libvirt.host [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.387 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.388 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.389 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.390 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.391 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.391 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.392 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.393 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.394 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.395 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.397 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.398 2 DEBUG nova.virt.hardware [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.403 2 DEBUG nova.virt.libvirt.vif [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:44:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1972367434',display_name='tempest-TestServerBasicOps-server-1972367434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1972367434',id=12,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF2ZtyfUubtp8cheeFyoIba9G5o+ZW6wuKNTuSzPdheIihIAcfNRwabevQg8r7wCcTt89oafysBrW1H/16794EDH2Pe1JdvkSavQZRaYm7HhE4A4CEuh2libnTsyYV87Gw==',key_name='tempest-TestServerBasicOps-1578654810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0363243e85d429c956681904cf9714d',ramdisk_id='',reservation_id='r-fmy0wsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1707036300',owner_user_name='tempest-TestServerBasicOps-1707036300-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:44:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6477d2ef96bd4c318dea2a18da231121',uuid=9f5d3eac-e68c-4a0e-8679-0880a0c51bab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.404 2 DEBUG nova.network.os_vif_util [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Converting VIF {"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.406 2 DEBUG nova.network.os_vif_util [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.408 2 DEBUG nova.objects.instance [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f5d3eac-e68c-4a0e-8679-0880a0c51bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.420 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <uuid>9f5d3eac-e68c-4a0e-8679-0880a0c51bab</uuid>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <name>instance-0000000c</name>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:name>tempest-TestServerBasicOps-server-1972367434</nova:name>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:44:16</nova:creationTime>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:user uuid="6477d2ef96bd4c318dea2a18da231121">tempest-TestServerBasicOps-1707036300-project-member</nova:user>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:project uuid="a0363243e85d429c956681904cf9714d">tempest-TestServerBasicOps-1707036300</nova:project>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         <nova:port uuid="bb1981a1-d5bc-4236-97ff-2763b967de6c">
Oct 02 19:44:16 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <system>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <entry name="serial">9f5d3eac-e68c-4a0e-8679-0880a0c51bab</entry>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <entry name="uuid">9f5d3eac-e68c-4a0e-8679-0880a0c51bab</entry>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </system>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <os>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </os>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <features>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </features>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.config"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:b1:3d:38"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <target dev="tapbb1981a1-d5"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/console.log" append="off"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <video>
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </video>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:44:16 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:44:16 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:44:16 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:44:16 compute-0 nova_compute[194781]: </domain>
Oct 02 19:44:16 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.427 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Preparing to wait for external event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.428 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.429 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.430 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.432 2 DEBUG nova.virt.libvirt.vif [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:44:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1972367434',display_name='tempest-TestServerBasicOps-server-1972367434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1972367434',id=12,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF2ZtyfUubtp8cheeFyoIba9G5o+ZW6wuKNTuSzPdheIihIAcfNRwabevQg8r7wCcTt89oafysBrW1H/16794EDH2Pe1JdvkSavQZRaYm7HhE4A4CEuh2libnTsyYV87Gw==',key_name='tempest-TestServerBasicOps-1578654810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0363243e85d429c956681904cf9714d',ramdisk_id='',reservation_id='r-fmy0wsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1707036300',owner_user_name='tempest-TestServerBasicOps-1707036300-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:44:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6477d2ef96bd4c318dea2a18da231121',uuid=9f5d3eac-e68c-4a0e-8679-0880a0c51bab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.432 2 DEBUG nova.network.os_vif_util [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Converting VIF {"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.434 2 DEBUG nova.network.os_vif_util [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.435 2 DEBUG os_vif [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb1981a1-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb1981a1-d5, col_values=(('external_ids', {'iface-id': 'bb1981a1-d5bc-4236-97ff-2763b967de6c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:3d:38', 'vm-uuid': '9f5d3eac-e68c-4a0e-8679-0880a0c51bab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:16 compute-0 NetworkManager[52324]: <info>  [1759434256.4500] manager: (tapbb1981a1-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.462 2 INFO os_vif [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5')
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.540 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.541 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.541 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] No VIF found with MAC fa:16:3e:b1:3d:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.542 2 INFO nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Using config drive
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.946 2 INFO nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Creating config drive at /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.config
Oct 02 19:44:16 compute-0 nova_compute[194781]: 2025-10-02 19:44:16.960 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcu0njg6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.094 2 DEBUG oslo_concurrency.processutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcu0njg6z" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:17 compute-0 kernel: tapbb1981a1-d5: entered promiscuous mode
Oct 02 19:44:17 compute-0 NetworkManager[52324]: <info>  [1759434257.1726] manager: (tapbb1981a1-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_controller[97052]: 2025-10-02T19:44:17Z|00142|binding|INFO|Claiming lport bb1981a1-d5bc-4236-97ff-2763b967de6c for this chassis.
Oct 02 19:44:17 compute-0 ovn_controller[97052]: 2025-10-02T19:44:17Z|00143|binding|INFO|bb1981a1-d5bc-4236-97ff-2763b967de6c: Claiming fa:16:3e:b1:3d:38 10.100.0.4
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.204 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:3d:38 10.100.0.4'], port_security=['fa:16:3e:b1:3d:38 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9f5d3eac-e68c-4a0e-8679-0880a0c51bab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0363243e85d429c956681904cf9714d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be934c85-b635-4553-97f2-e134629b726f e14909a1-3afd-4652-b1d9-0e53b8dc4567', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b146ad-089c-4a5e-8793-a1df4c7b2b23, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=bb1981a1-d5bc-4236-97ff-2763b967de6c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.206 105943 INFO neutron.agent.ovn.metadata.agent [-] Port bb1981a1-d5bc-4236-97ff-2763b967de6c in datapath 61aead9f-19ea-477e-b1cf-20f3fec72d79 bound to our chassis
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.207 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61aead9f-19ea-477e-b1cf-20f3fec72d79
Oct 02 19:44:17 compute-0 systemd-udevd[260418]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_controller[97052]: 2025-10-02T19:44:17Z|00144|binding|INFO|Setting lport bb1981a1-d5bc-4236-97ff-2763b967de6c ovn-installed in OVS
Oct 02 19:44:17 compute-0 ovn_controller[97052]: 2025-10-02T19:44:17Z|00145|binding|INFO|Setting lport bb1981a1-d5bc-4236-97ff-2763b967de6c up in Southbound
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.223 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[11f0d3aa-0042-4286-a545-d702bac34f46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.224 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61aead9f-11 in ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.229 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61aead9f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.229 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[34191729-2ecb-4981-a16c-3dd7f399365e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.230 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[d2db4498-2e2f-4405-a735-262fb520f2b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 NetworkManager[52324]: <info>  [1759434257.2326] device (tapbb1981a1-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:44:17 compute-0 NetworkManager[52324]: <info>  [1759434257.2420] device (tapbb1981a1-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.243 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[25dc3330-28c5-4270-9158-0026870ce828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 systemd-machined[154795]: New machine qemu-13-instance-0000000c.
Oct 02 19:44:17 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.274 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[8e74d6fc-069c-47be-85d6-30c767b53002]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.314 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[2adeea39-2800-4194-90f8-0aa0e0f58219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 NetworkManager[52324]: <info>  [1759434257.3251] manager: (tap61aead9f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.322 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cc747103-8865-4977-aa15-bb2b0827ea65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.366 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d48c61-6a31-45d4-b1b1-2769296c55b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.370 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[1467c465-fa83-4e0e-afa5-54f8da04a7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 NetworkManager[52324]: <info>  [1759434257.3978] device (tap61aead9f-10): carrier: link connected
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.404 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[010c7886-fdeb-40b4-aaeb-47988e9f0bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.427 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[3f888468-c5c4-4d06-94f2-9216e16304c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61aead9f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:4b:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541802, 'reachable_time': 42020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260454, 'error': None, 'target': 'ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.446 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[3716abd3-f52a-4f87-b3e4-5e7c856a0684]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:4b1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541802, 'tstamp': 541802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260455, 'error': None, 'target': 'ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.469 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8794c0-6d39-4335-8438-e2941acc60ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61aead9f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:4b:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541802, 'reachable_time': 42020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260456, 'error': None, 'target': 'ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.512 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cd97ef35-12d2-4ad6-8c19-50e3991e86e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.601 2 DEBUG nova.compute.manager [req-12d19f10-339d-4f47-a938-c6d71073cb9f req-88e0b5e0-02ef-44bc-a58b-73edc25b6670 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.601 2 DEBUG oslo_concurrency.lockutils [req-12d19f10-339d-4f47-a938-c6d71073cb9f req-88e0b5e0-02ef-44bc-a58b-73edc25b6670 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.603 2 DEBUG oslo_concurrency.lockutils [req-12d19f10-339d-4f47-a938-c6d71073cb9f req-88e0b5e0-02ef-44bc-a58b-73edc25b6670 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.603 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[2b256b18-e7c7-41c6-9662-d83b9a1c9858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.606 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61aead9f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.606 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.605 2 DEBUG oslo_concurrency.lockutils [req-12d19f10-339d-4f47-a938-c6d71073cb9f req-88e0b5e0-02ef-44bc-a58b-73edc25b6670 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.607 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61aead9f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:17 compute-0 NetworkManager[52324]: <info>  [1759434257.6111] manager: (tap61aead9f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct 02 19:44:17 compute-0 kernel: tap61aead9f-10: entered promiscuous mode
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.607 2 DEBUG nova.compute.manager [req-12d19f10-339d-4f47-a938-c6d71073cb9f req-88e0b5e0-02ef-44bc-a58b-73edc25b6670 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Processing event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.619 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61aead9f-10, col_values=(('external_ids', {'iface-id': '0e132986-681b-4e69-9066-5d6f6dd06694'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_controller[97052]: 2025-10-02T19:44:17Z|00146|binding|INFO|Releasing lport 0e132986-681b-4e69-9066-5d6f6dd06694 from this chassis (sb_readonly=0)
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.626 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61aead9f-19ea-477e-b1cf-20f3fec72d79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61aead9f-19ea-477e-b1cf-20f3fec72d79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.627 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e014db0d-a12b-4b35-8339-665fcbbf6073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.629 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-61aead9f-19ea-477e-b1cf-20f3fec72d79
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/61aead9f-19ea-477e-b1cf-20f3fec72d79.pid.haproxy
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID 61aead9f-19ea-477e-b1cf-20f3fec72d79
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:44:17 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:17.630 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'env', 'PROCESS_TAG=haproxy-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61aead9f-19ea-477e-b1cf-20f3fec72d79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.990 2 DEBUG nova.network.neutron [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Updated VIF entry in instance network info cache for port bb1981a1-d5bc-4236-97ff-2763b967de6c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:44:17 compute-0 nova_compute[194781]: 2025-10-02 19:44:17.991 2 DEBUG nova.network.neutron [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Updating instance_info_cache with network_info: [{"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.009 2 DEBUG oslo_concurrency.lockutils [req-1b2ff03c-c5ae-4d2b-8b6a-6b3c60754c05 req-92d6ba75-9e14-47c9-96a7-ae1eeb944cec fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:18 compute-0 podman[260494]: 2025-10-02 19:44:18.134567276 +0000 UTC m=+0.102987067 container create 0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 19:44:18 compute-0 podman[260494]: 2025-10-02 19:44:18.072596004 +0000 UTC m=+0.041015815 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:44:18 compute-0 systemd[1]: Started libpod-conmon-0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121.scope.
Oct 02 19:44:18 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19398ef839772b47ec29fddbb4af540c753880e7e5f34e9287f1d2227d4737d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:44:18 compute-0 podman[260494]: 2025-10-02 19:44:18.278277918 +0000 UTC m=+0.246697689 container init 0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:44:18 compute-0 podman[260494]: 2025-10-02 19:44:18.292866227 +0000 UTC m=+0.261285998 container start 0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:44:18 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [NOTICE]   (260513) : New worker (260515) forked
Oct 02 19:44:18 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [NOTICE]   (260513) : Loading success.
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.553 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.563 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434258.5627403, 9f5d3eac-e68c-4a0e-8679-0880a0c51bab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.563 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] VM Started (Lifecycle Event)
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.567 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.572 2 INFO nova.virt.libvirt.driver [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Instance spawned successfully.
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.572 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.600 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.609 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.615 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.615 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.616 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.616 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.616 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.617 2 DEBUG nova.virt.libvirt.driver [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.656 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.657 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434258.5628717, 9f5d3eac-e68c-4a0e-8679-0880a0c51bab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.657 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] VM Paused (Lifecycle Event)
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.705 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.711 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434258.5657287, 9f5d3eac-e68c-4a0e-8679-0880a0c51bab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.711 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] VM Resumed (Lifecycle Event)
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.741 2 INFO nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Took 5.86 seconds to spawn the instance on the hypervisor.
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.741 2 DEBUG nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.742 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.755 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.789 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.819 2 INFO nova.compute.manager [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Took 6.46 seconds to build instance.
Oct 02 19:44:18 compute-0 nova_compute[194781]: 2025-10-02 19:44:18.840 2 DEBUG oslo_concurrency.lockutils [None req-3dc9c27d-1f40-4c4b-b149-1a71583dbf40 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:19 compute-0 ovn_controller[97052]: 2025-10-02T19:44:19Z|00147|binding|INFO|Releasing lport bd80466a-6146-45a7-be35-ec332e1ee93c from this chassis (sb_readonly=0)
Oct 02 19:44:19 compute-0 ovn_controller[97052]: 2025-10-02T19:44:19Z|00148|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:44:19 compute-0 ovn_controller[97052]: 2025-10-02T19:44:19Z|00149|binding|INFO|Releasing lport 0e132986-681b-4e69-9066-5d6f6dd06694 from this chassis (sb_readonly=0)
Oct 02 19:44:19 compute-0 ovn_controller[97052]: 2025-10-02T19:44:19Z|00150|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.696 2 DEBUG nova.compute.manager [req-d1896b89-4e72-48f5-9b19-667d95bdf40f req-b70ebcc7-54c9-42b9-8269-dfaf40279f0d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.697 2 DEBUG oslo_concurrency.lockutils [req-d1896b89-4e72-48f5-9b19-667d95bdf40f req-b70ebcc7-54c9-42b9-8269-dfaf40279f0d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.697 2 DEBUG oslo_concurrency.lockutils [req-d1896b89-4e72-48f5-9b19-667d95bdf40f req-b70ebcc7-54c9-42b9-8269-dfaf40279f0d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.697 2 DEBUG oslo_concurrency.lockutils [req-d1896b89-4e72-48f5-9b19-667d95bdf40f req-b70ebcc7-54c9-42b9-8269-dfaf40279f0d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.698 2 DEBUG nova.compute.manager [req-d1896b89-4e72-48f5-9b19-667d95bdf40f req-b70ebcc7-54c9-42b9-8269-dfaf40279f0d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] No waiting events found dispatching network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:44:19 compute-0 nova_compute[194781]: 2025-10-02 19:44:19.698 2 WARNING nova.compute.manager [req-d1896b89-4e72-48f5-9b19-667d95bdf40f req-b70ebcc7-54c9-42b9-8269-dfaf40279f0d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received unexpected event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c for instance with vm_state active and task_state None.
Oct 02 19:44:20 compute-0 podman[260525]: 2025-10-02 19:44:20.73968011 +0000 UTC m=+0.110076106 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:44:20 compute-0 podman[260526]: 2025-10-02 19:44:20.73780087 +0000 UTC m=+0.091297516 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:44:21 compute-0 nova_compute[194781]: 2025-10-02 19:44:21.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:21 compute-0 nova_compute[194781]: 2025-10-02 19:44:21.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:22 compute-0 nova_compute[194781]: 2025-10-02 19:44:22.944 2 DEBUG nova.compute.manager [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-changed-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:22 compute-0 nova_compute[194781]: 2025-10-02 19:44:22.944 2 DEBUG nova.compute.manager [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Refreshing instance network info cache due to event network-changed-bb1981a1-d5bc-4236-97ff-2763b967de6c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:44:22 compute-0 nova_compute[194781]: 2025-10-02 19:44:22.945 2 DEBUG oslo_concurrency.lockutils [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:22 compute-0 nova_compute[194781]: 2025-10-02 19:44:22.945 2 DEBUG oslo_concurrency.lockutils [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:22 compute-0 nova_compute[194781]: 2025-10-02 19:44:22.946 2 DEBUG nova.network.neutron [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Refreshing network info cache for port bb1981a1-d5bc-4236-97ff-2763b967de6c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:44:23 compute-0 podman[260564]: 2025-10-02 19:44:23.733881117 +0000 UTC m=+0.109479690 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:44:23 compute-0 podman[260565]: 2025-10-02 19:44:23.776154394 +0000 UTC m=+0.126517885 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:44:23 compute-0 nova_compute[194781]: 2025-10-02 19:44:23.861 2 DEBUG nova.network.neutron [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Updated VIF entry in instance network info cache for port bb1981a1-d5bc-4236-97ff-2763b967de6c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:44:23 compute-0 nova_compute[194781]: 2025-10-02 19:44:23.861 2 DEBUG nova.network.neutron [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Updating instance_info_cache with network_info: [{"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:24 compute-0 nova_compute[194781]: 2025-10-02 19:44:24.013 2 DEBUG oslo_concurrency.lockutils [req-081caa54-e187-492e-b373-d9605bb80677 req-afe0b8c2-7ecb-4d44-9e9f-d7d53f5b2a0a fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-9f5d3eac-e68c-4a0e-8679-0880a0c51bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:25 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:25.083 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:44:25 compute-0 nova_compute[194781]: 2025-10-02 19:44:25.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:25 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:25.084 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:44:25 compute-0 nova_compute[194781]: 2025-10-02 19:44:25.881 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434250.8790424, fd018206-5b5d-4759-8481-a7dd68c01a2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:25 compute-0 nova_compute[194781]: 2025-10-02 19:44:25.882 2 INFO nova.compute.manager [-] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] VM Stopped (Lifecycle Event)
Oct 02 19:44:25 compute-0 nova_compute[194781]: 2025-10-02 19:44:25.921 2 DEBUG nova.compute.manager [None req-17b11048-9af0-4578-b316-177795f93e9e - - - - - -] [instance: fd018206-5b5d-4759-8481-a7dd68c01a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:26 compute-0 nova_compute[194781]: 2025-10-02 19:44:26.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:26 compute-0 nova_compute[194781]: 2025-10-02 19:44:26.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:29 compute-0 podman[209015]: time="2025-10-02T19:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:44:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35680 "" "Go-http-client/1.1"
Oct 02 19:44:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6626 "" "Go-http-client/1.1"
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.289 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.289 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.289 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.290 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.290 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.291 2 INFO nova.compute.manager [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Terminating instance
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.292 2 DEBUG nova.compute.manager [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:44:30 compute-0 kernel: tapb27e7b6f-4a (unregistering): left promiscuous mode
Oct 02 19:44:30 compute-0 NetworkManager[52324]: <info>  [1759434270.3603] device (tapb27e7b6f-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:44:30 compute-0 ovn_controller[97052]: 2025-10-02T19:44:30Z|00151|binding|INFO|Releasing lport b27e7b6f-4ab7-48d9-a674-eb640289b746 from this chassis (sb_readonly=0)
Oct 02 19:44:30 compute-0 ovn_controller[97052]: 2025-10-02T19:44:30Z|00152|binding|INFO|Setting lport b27e7b6f-4ab7-48d9-a674-eb640289b746 down in Southbound
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 ovn_controller[97052]: 2025-10-02T19:44:30Z|00153|binding|INFO|Removing iface tapb27e7b6f-4a ovn-installed in OVS
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.402 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:84:0f 10.100.0.3'], port_security=['fa:16:3e:15:84:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6eada58a-d077-43e5-ab40-dd45abbe38f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d458e53358c4398b6ba6051d5c82805', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9d169388-279d-4835-af73-74628348527d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61d3b384-7807-48c7-ac4b-e6e147bd5ac4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=b27e7b6f-4ab7-48d9-a674-eb640289b746) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.403 105943 INFO neutron.agent.ovn.metadata.agent [-] Port b27e7b6f-4ab7-48d9-a674-eb640289b746 in datapath a4e44b64-c472-49fb-ac29-fcbb65fb1bdc unbound from our chassis
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.406 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.407 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[50086da3-e2a4-45b3-a24d-05758c47948f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.407 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc namespace which is not needed anymore
Oct 02 19:44:30 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 02 19:44:30 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000006.scope: Consumed 41.951s CPU time.
Oct 02 19:44:30 compute-0 systemd-machined[154795]: Machine qemu-12-instance-00000006 terminated.
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.573 2 INFO nova.virt.libvirt.driver [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Instance destroyed successfully.
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.574 2 DEBUG nova.objects.instance [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lazy-loading 'resources' on Instance uuid 6eada58a-d077-43e5-ab40-dd45abbe38f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.598 2 DEBUG nova.virt.libvirt.vif [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950508224',display_name='tempest-ServerActionsTestJSON-server-1950508224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950508224',id=6,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDKQ3/bi48ARS3VXn9iWcKo/JXrKXcAcgt+LOQWkb1k3Pe3wzNtwmWDod3uxRQb5Dp+at+GfgNvvsZcS9q05pPmKjxF66rj7w8mLvCmgF8foOmp3mBcRf5ivcSaS/PCliQ==',key_name='tempest-keypair-1857372306',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:42:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d458e53358c4398b6ba6051d5c82805',ramdisk_id='',reservation_id='r-80w0dyeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-897514974',owner_user_name='tempest-ServerActionsTestJSON-897514974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:43:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1de0891a14a8410da559e3197c8ff98b',uuid=6eada58a-d077-43e5-ab40-dd45abbe38f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.598 2 DEBUG nova.network.os_vif_util [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converting VIF {"id": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "address": "fa:16:3e:15:84:0f", "network": {"id": "a4e44b64-c472-49fb-ac29-fcbb65fb1bdc", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-575966371-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d458e53358c4398b6ba6051d5c82805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27e7b6f-4a", "ovs_interfaceid": "b27e7b6f-4ab7-48d9-a674-eb640289b746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.599 2 DEBUG nova.network.os_vif_util [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.599 2 DEBUG os_vif [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb27e7b6f-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.606 2 INFO os_vif [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:84:0f,bridge_name='br-int',has_traffic_filtering=True,id=b27e7b6f-4ab7-48d9-a674-eb640289b746,network=Network(a4e44b64-c472-49fb-ac29-fcbb65fb1bdc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27e7b6f-4a')
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.607 2 INFO nova.virt.libvirt.driver [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Deleting instance files /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3_del
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.607 2 INFO nova.virt.libvirt.driver [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Deletion of /var/lib/nova/instances/6eada58a-d077-43e5-ab40-dd45abbe38f3_del complete
Oct 02 19:44:30 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [NOTICE]   (259922) : haproxy version is 2.8.14-c23fe91
Oct 02 19:44:30 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [NOTICE]   (259922) : path to executable is /usr/sbin/haproxy
Oct 02 19:44:30 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [WARNING]  (259922) : Exiting Master process...
Oct 02 19:44:30 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [WARNING]  (259922) : Exiting Master process...
Oct 02 19:44:30 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [ALERT]    (259922) : Current worker (259925) exited with code 143 (Terminated)
Oct 02 19:44:30 compute-0 neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc[259912]: [WARNING]  (259922) : All workers exited. Exiting... (0)
Oct 02 19:44:30 compute-0 systemd[1]: libpod-b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08.scope: Deactivated successfully.
Oct 02 19:44:30 compute-0 podman[260641]: 2025-10-02 19:44:30.650659577 +0000 UTC m=+0.082327457 container died b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08-userdata-shm.mount: Deactivated successfully.
Oct 02 19:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea75340e71a1354d0e5c28e17cad6e03113255803dbcf679ca2f2b22332b1fcf-merged.mount: Deactivated successfully.
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.725 2 INFO nova.compute.manager [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Took 0.43 seconds to destroy the instance on the hypervisor.
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.725 2 DEBUG oslo.service.loopingcall [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.726 2 DEBUG nova.compute.manager [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.726 2 DEBUG nova.network.neutron [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:44:30 compute-0 podman[260641]: 2025-10-02 19:44:30.727106715 +0000 UTC m=+0.158774575 container cleanup b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:44:30 compute-0 systemd[1]: libpod-conmon-b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08.scope: Deactivated successfully.
Oct 02 19:44:30 compute-0 podman[260673]: 2025-10-02 19:44:30.811391032 +0000 UTC m=+0.057510564 container remove b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.820 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa40ca7-b4d3-4720-845d-eae60f7d9c95]: (4, ('Thu Oct  2 07:44:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc (b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08)\nb9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08\nThu Oct  2 07:44:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc (b9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08)\nb9d1afd7a25754b83a93a09396d7d2708a471b1b6f84ac2d5ebd824ca4a9eb08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.822 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0836f2-5b3b-4d55-911b-6dcb529a6b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.825 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4e44b64-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 kernel: tapa4e44b64-c0: left promiscuous mode
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.842 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[dc97e4ef-c964-49d8-8bf6-308fb39abb3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 nova_compute[194781]: 2025-10-02 19:44:30.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.858 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[c31a7e12-930b-43f0-8b46-ea096ad4b4ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.860 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[88ce8407-6a3c-4651-9c21-4aca8320fca2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.882 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[1575fab3-a2d6-4a33-9316-57b199cb226d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536192, 'reachable_time': 21756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260702, 'error': None, 'target': 'ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.886 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a4e44b64-c472-49fb-ac29-fcbb65fb1bdc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:44:30 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:30.886 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[35825abb-631a-40f7-a22f-bb879ffcbb84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:44:30 compute-0 systemd[1]: run-netns-ovnmeta\x2da4e44b64\x2dc472\x2d49fb\x2dac29\x2dfcbb65fb1bdc.mount: Deactivated successfully.
Oct 02 19:44:30 compute-0 podman[260674]: 2025-10-02 19:44:30.888258672 +0000 UTC m=+0.116133478 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:31 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:31.087 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.323 2 DEBUG nova.compute.manager [req-e3e58788-5fcc-4be5-b51a-1b838f598e19 req-e5f4999e-78d4-4127-b047-3c55410b2fc8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-unplugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.324 2 DEBUG oslo_concurrency.lockutils [req-e3e58788-5fcc-4be5-b51a-1b838f598e19 req-e5f4999e-78d4-4127-b047-3c55410b2fc8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.324 2 DEBUG oslo_concurrency.lockutils [req-e3e58788-5fcc-4be5-b51a-1b838f598e19 req-e5f4999e-78d4-4127-b047-3c55410b2fc8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.324 2 DEBUG oslo_concurrency.lockutils [req-e3e58788-5fcc-4be5-b51a-1b838f598e19 req-e5f4999e-78d4-4127-b047-3c55410b2fc8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.324 2 DEBUG nova.compute.manager [req-e3e58788-5fcc-4be5-b51a-1b838f598e19 req-e5f4999e-78d4-4127-b047-3c55410b2fc8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-unplugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:44:31 compute-0 nova_compute[194781]: 2025-10-02 19:44:31.324 2 DEBUG nova.compute.manager [req-e3e58788-5fcc-4be5-b51a-1b838f598e19 req-e5f4999e-78d4-4127-b047-3c55410b2fc8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-unplugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:44:31 compute-0 openstack_network_exporter[211160]: ERROR   19:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:44:31 compute-0 openstack_network_exporter[211160]: ERROR   19:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:31 compute-0 openstack_network_exporter[211160]: ERROR   19:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:44:31 compute-0 openstack_network_exporter[211160]: ERROR   19:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:44:31 compute-0 openstack_network_exporter[211160]: ERROR   19:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.124 2 DEBUG nova.network.neutron [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.196 2 DEBUG nova.compute.manager [req-5e62feaf-6bd5-4033-97af-c1d292a4b183 req-aeb36c89-88b8-4368-a6f2-89f2b80430c7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-deleted-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.196 2 INFO nova.compute.manager [req-5e62feaf-6bd5-4033-97af-c1d292a4b183 req-aeb36c89-88b8-4368-a6f2-89f2b80430c7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Neutron deleted interface b27e7b6f-4ab7-48d9-a674-eb640289b746; detaching it from the instance and deleting it from the info cache
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.196 2 DEBUG nova.network.neutron [req-5e62feaf-6bd5-4033-97af-c1d292a4b183 req-aeb36c89-88b8-4368-a6f2-89f2b80430c7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.198 2 INFO nova.compute.manager [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Took 1.47 seconds to deallocate network for instance.
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.247 2 DEBUG nova.compute.manager [req-5e62feaf-6bd5-4033-97af-c1d292a4b183 req-aeb36c89-88b8-4368-a6f2-89f2b80430c7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Detach interface failed, port_id=b27e7b6f-4ab7-48d9-a674-eb640289b746, reason: Instance 6eada58a-d077-43e5-ab40-dd45abbe38f3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.300 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.300 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.415 2 DEBUG nova.compute.provider_tree [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.457 2 DEBUG nova.scheduler.client.report [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.514 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.571 2 INFO nova.scheduler.client.report [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Deleted allocations for instance 6eada58a-d077-43e5-ab40-dd45abbe38f3
Oct 02 19:44:32 compute-0 nova_compute[194781]: 2025-10-02 19:44:32.684 2 DEBUG oslo_concurrency.lockutils [None req-04ded4c3-403c-4c01-a88c-a6936ce83772 1de0891a14a8410da559e3197c8ff98b 5d458e53358c4398b6ba6051d5c82805 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:33 compute-0 nova_compute[194781]: 2025-10-02 19:44:33.451 2 DEBUG nova.compute.manager [req-18df15e4-a018-4d0b-a722-afe9a0c6a451 req-f6a6ef2e-aa8d-409c-8086-cd80e55da29c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:44:33 compute-0 nova_compute[194781]: 2025-10-02 19:44:33.452 2 DEBUG oslo_concurrency.lockutils [req-18df15e4-a018-4d0b-a722-afe9a0c6a451 req-f6a6ef2e-aa8d-409c-8086-cd80e55da29c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:33 compute-0 nova_compute[194781]: 2025-10-02 19:44:33.452 2 DEBUG oslo_concurrency.lockutils [req-18df15e4-a018-4d0b-a722-afe9a0c6a451 req-f6a6ef2e-aa8d-409c-8086-cd80e55da29c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:33 compute-0 nova_compute[194781]: 2025-10-02 19:44:33.452 2 DEBUG oslo_concurrency.lockutils [req-18df15e4-a018-4d0b-a722-afe9a0c6a451 req-f6a6ef2e-aa8d-409c-8086-cd80e55da29c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "6eada58a-d077-43e5-ab40-dd45abbe38f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:33 compute-0 nova_compute[194781]: 2025-10-02 19:44:33.452 2 DEBUG nova.compute.manager [req-18df15e4-a018-4d0b-a722-afe9a0c6a451 req-f6a6ef2e-aa8d-409c-8086-cd80e55da29c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] No waiting events found dispatching network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:44:33 compute-0 nova_compute[194781]: 2025-10-02 19:44:33.453 2 WARNING nova.compute.manager [req-18df15e4-a018-4d0b-a722-afe9a0c6a451 req-f6a6ef2e-aa8d-409c-8086-cd80e55da29c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Received unexpected event network-vif-plugged-b27e7b6f-4ab7-48d9-a674-eb640289b746 for instance with vm_state deleted and task_state None.
Oct 02 19:44:35 compute-0 nova_compute[194781]: 2025-10-02 19:44:35.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:35 compute-0 nova_compute[194781]: 2025-10-02 19:44:35.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:35 compute-0 nova_compute[194781]: 2025-10-02 19:44:35.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:44:35 compute-0 nova_compute[194781]: 2025-10-02 19:44:35.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:36 compute-0 nova_compute[194781]: 2025-10-02 19:44:36.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.059 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.060 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.060 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.060 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:44:37 compute-0 ovn_controller[97052]: 2025-10-02T19:44:37Z|00154|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:44:37 compute-0 ovn_controller[97052]: 2025-10-02T19:44:37Z|00155|binding|INFO|Releasing lport 0e132986-681b-4e69-9066-5d6f6dd06694 from this chassis (sb_readonly=0)
Oct 02 19:44:37 compute-0 ovn_controller[97052]: 2025-10-02T19:44:37Z|00156|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.156 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.271 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.272 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.330 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.335 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.395 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.396 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.462 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.463 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.522 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.524 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.619 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.630 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.716 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.718 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:44:37 compute-0 nova_compute[194781]: 2025-10-02 19:44:37.786 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.196 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.198 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4778MB free_disk=72.38132858276367GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.198 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.199 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.277 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.277 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.277 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 9f5d3eac-e68c-4a0e-8679-0880a0c51bab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.277 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.277 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.358 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.373 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.395 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:44:38 compute-0 nova_compute[194781]: 2025-10-02 19:44:38.395 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:39 compute-0 nova_compute[194781]: 2025-10-02 19:44:39.396 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:39 compute-0 nova_compute[194781]: 2025-10-02 19:44:39.397 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:40 compute-0 nova_compute[194781]: 2025-10-02 19:44:40.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:40 compute-0 nova_compute[194781]: 2025-10-02 19:44:40.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:40 compute-0 podman[260738]: 2025-10-02 19:44:40.729087559 +0000 UTC m=+0.095504747 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct 02 19:44:40 compute-0 podman[260737]: 2025-10-02 19:44:40.768595403 +0000 UTC m=+0.136930933 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:44:41 compute-0 nova_compute[194781]: 2025-10-02 19:44:41.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:41 compute-0 nova_compute[194781]: 2025-10-02 19:44:41.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:43 compute-0 sshd-session[260773]: Connection closed by 220.154.129.88 port 34588
Oct 02 19:44:43 compute-0 podman[260776]: 2025-10-02 19:44:43.743764063 +0000 UTC m=+0.091947173 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:44:43 compute-0 podman[260775]: 2025-10-02 19:44:43.773932097 +0000 UTC m=+0.129282888 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Oct 02 19:44:43 compute-0 podman[260774]: 2025-10-02 19:44:43.805258402 +0000 UTC m=+0.159239708 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Oct 02 19:44:45 compute-0 nova_compute[194781]: 2025-10-02 19:44:45.567 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434270.5666795, 6eada58a-d077-43e5-ab40-dd45abbe38f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:44:45 compute-0 nova_compute[194781]: 2025-10-02 19:44:45.568 2 INFO nova.compute.manager [-] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] VM Stopped (Lifecycle Event)
Oct 02 19:44:45 compute-0 nova_compute[194781]: 2025-10-02 19:44:45.593 2 DEBUG nova.compute.manager [None req-f123022e-9c69-4372-8dda-02c701e22883 - - - - - -] [instance: 6eada58a-d077-43e5-ab40-dd45abbe38f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:44:45 compute-0 nova_compute[194781]: 2025-10-02 19:44:45.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:46 compute-0 nova_compute[194781]: 2025-10-02 19:44:46.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:47.490 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:44:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:47.491 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:44:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:47.493 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.028 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.057 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.058 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.058 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.306 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.306 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.306 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:44:48 compute-0 nova_compute[194781]: 2025-10-02 19:44:48.306 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:44:49 compute-0 ovn_controller[97052]: 2025-10-02T19:44:49Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:3d:38 10.100.0.4
Oct 02 19:44:49 compute-0 ovn_controller[97052]: 2025-10-02T19:44:49Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:3d:38 10.100.0.4
Oct 02 19:44:50 compute-0 nova_compute[194781]: 2025-10-02 19:44:50.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:50 compute-0 nova_compute[194781]: 2025-10-02 19:44:50.669 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:44:50 compute-0 nova_compute[194781]: 2025-10-02 19:44:50.713 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:44:50 compute-0 nova_compute[194781]: 2025-10-02 19:44:50.714 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:44:50 compute-0 nova_compute[194781]: 2025-10-02 19:44:50.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:51 compute-0 nova_compute[194781]: 2025-10-02 19:44:51.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:51 compute-0 podman[260840]: 2025-10-02 19:44:51.716836078 +0000 UTC m=+0.091210353 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:44:51 compute-0 podman[260841]: 2025-10-02 19:44:51.734061947 +0000 UTC m=+0.097753937 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:44:54 compute-0 podman[260879]: 2025-10-02 19:44:54.750873538 +0000 UTC m=+0.123211056 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:44:54 compute-0 podman[260880]: 2025-10-02 19:44:54.802001571 +0000 UTC m=+0.170060305 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:44:55 compute-0 nova_compute[194781]: 2025-10-02 19:44:55.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:55 compute-0 nova_compute[194781]: 2025-10-02 19:44:55.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:56 compute-0 nova_compute[194781]: 2025-10-02 19:44:56.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:56.591 106055 DEBUG eventlet.wsgi.server [-] (106055) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:56.593 106055 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: Accept: */*
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: Connection: close
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: Content-Type: text/plain
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: Host: 169.254.169.254
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: User-Agent: curl/7.84.0
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: X-Forwarded-For: 10.100.0.4
Oct 02 19:44:56 compute-0 ovn_metadata_agent[105919]: X-Ovn-Network-Id: 61aead9f-19ea-477e-b1cf-20f3fec72d79 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:58.120 106055 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 19:44:58 compute-0 haproxy-metadata-proxy-61aead9f-19ea-477e-b1cf-20f3fec72d79[260515]: 10.100.0.4:38128 [02/Oct/2025:19:44:56.589] listener listener/metadata 0/0/0/1530/1530 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:58.122 106055 INFO eventlet.wsgi.server [-] 10.100.0.4,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5288210
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:58.275 106055 DEBUG eventlet.wsgi.server [-] (106055) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:58.276 106055 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: Accept: */*
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: Connection: close
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: Content-Length: 100
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: Content-Type: application/x-www-form-urlencoded
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: Host: 169.254.169.254
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: User-Agent: curl/7.84.0
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: X-Forwarded-For: 10.100.0.4
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: X-Ovn-Network-Id: 61aead9f-19ea-477e-b1cf-20f3fec72d79
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:58.508 106055 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 19:44:58 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:44:58.509 106055 INFO eventlet.wsgi.server [-] 10.100.0.4,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2327361
Oct 02 19:44:58 compute-0 haproxy-metadata-proxy-61aead9f-19ea-477e-b1cf-20f3fec72d79[260515]: 10.100.0.4:38142 [02/Oct/2025:19:44:58.273] listener listener/metadata 0/0/0/235/235 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 02 19:44:59 compute-0 podman[209015]: time="2025-10-02T19:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:44:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34447 "" "Go-http-client/1.1"
Oct 02 19:44:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6154 "" "Go-http-client/1.1"
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.906 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.907 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.908 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.909 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.909 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.912 2 INFO nova.compute.manager [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Terminating instance
Oct 02 19:45:00 compute-0 nova_compute[194781]: 2025-10-02 19:45:00.914 2 DEBUG nova.compute.manager [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:45:00 compute-0 kernel: tapbb1981a1-d5 (unregistering): left promiscuous mode
Oct 02 19:45:00 compute-0 NetworkManager[52324]: <info>  [1759434300.9660] device (tapbb1981a1-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:45:01 compute-0 ovn_controller[97052]: 2025-10-02T19:45:00Z|00157|binding|INFO|Releasing lport bb1981a1-d5bc-4236-97ff-2763b967de6c from this chassis (sb_readonly=0)
Oct 02 19:45:01 compute-0 ovn_controller[97052]: 2025-10-02T19:45:01Z|00158|binding|INFO|Setting lport bb1981a1-d5bc-4236-97ff-2763b967de6c down in Southbound
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 ovn_controller[97052]: 2025-10-02T19:45:01Z|00159|binding|INFO|Removing iface tapbb1981a1-d5 ovn-installed in OVS
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.024 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:3d:38 10.100.0.4'], port_security=['fa:16:3e:b1:3d:38 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9f5d3eac-e68c-4a0e-8679-0880a0c51bab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0363243e85d429c956681904cf9714d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be934c85-b635-4553-97f2-e134629b726f e14909a1-3afd-4652-b1d9-0e53b8dc4567', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03b146ad-089c-4a5e-8793-a1df4c7b2b23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=bb1981a1-d5bc-4236-97ff-2763b967de6c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.026 105943 INFO neutron.agent.ovn.metadata.agent [-] Port bb1981a1-d5bc-4236-97ff-2763b967de6c in datapath 61aead9f-19ea-477e-b1cf-20f3fec72d79 unbound from our chassis
Oct 02 19:45:01 compute-0 ovn_controller[97052]: 2025-10-02T19:45:01Z|00160|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:45:01 compute-0 ovn_controller[97052]: 2025-10-02T19:45:01Z|00161|binding|INFO|Releasing lport 0e132986-681b-4e69-9066-5d6f6dd06694 from this chassis (sb_readonly=0)
Oct 02 19:45:01 compute-0 ovn_controller[97052]: 2025-10-02T19:45:01Z|00162|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.028 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61aead9f-19ea-477e-b1cf-20f3fec72d79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.029 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[600d5e9b-bff0-4c15-8023-4e4ae5d6e10e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.031 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79 namespace which is not needed anymore
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 02 19:45:01 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 34.827s CPU time.
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 systemd-machined[154795]: Machine qemu-13-instance-0000000c terminated.
Oct 02 19:45:01 compute-0 podman[260929]: 2025-10-02 19:45:01.112759701 +0000 UTC m=+0.128760784 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.205 2 INFO nova.virt.libvirt.driver [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Instance destroyed successfully.
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.205 2 DEBUG nova.objects.instance [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lazy-loading 'resources' on Instance uuid 9f5d3eac-e68c-4a0e-8679-0880a0c51bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:01 compute-0 anacron[98887]: Job `cron.daily' started
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.242 2 DEBUG nova.virt.libvirt.vif [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:44:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1972367434',display_name='tempest-TestServerBasicOps-server-1972367434',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1972367434',id=12,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF2ZtyfUubtp8cheeFyoIba9G5o+ZW6wuKNTuSzPdheIihIAcfNRwabevQg8r7wCcTt89oafysBrW1H/16794EDH2Pe1JdvkSavQZRaYm7HhE4A4CEuh2libnTsyYV87Gw==',key_name='tempest-TestServerBasicOps-1578654810',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:44:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0363243e85d429c956681904cf9714d',ramdisk_id='',reservation_id='r-fmy0wsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1707036300',owner_user_name='tempest-TestServerBasicOps-1707036300-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:44:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6477d2ef96bd4c318dea2a18da231121',uuid=9f5d3eac-e68c-4a0e-8679-0880a0c51bab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": 
"fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.242 2 DEBUG nova.network.os_vif_util [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Converting VIF {"id": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "address": "fa:16:3e:b1:3d:38", "network": {"id": "61aead9f-19ea-477e-b1cf-20f3fec72d79", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2111893398-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0363243e85d429c956681904cf9714d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1981a1-d5", "ovs_interfaceid": "bb1981a1-d5bc-4236-97ff-2763b967de6c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.243 2 DEBUG nova.network.os_vif_util [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.243 2 DEBUG os_vif [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb1981a1-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 anacron[98887]: Job `cron.daily' terminated
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.254 2 INFO os_vif [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:3d:38,bridge_name='br-int',has_traffic_filtering=True,id=bb1981a1-d5bc-4236-97ff-2763b967de6c,network=Network(61aead9f-19ea-477e-b1cf-20f3fec72d79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1981a1-d5')
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.254 2 INFO nova.virt.libvirt.driver [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Deleting instance files /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab_del
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.255 2 INFO nova.virt.libvirt.driver [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Deletion of /var/lib/nova/instances/9f5d3eac-e68c-4a0e-8679-0880a0c51bab_del complete
Oct 02 19:45:01 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [NOTICE]   (260513) : haproxy version is 2.8.14-c23fe91
Oct 02 19:45:01 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [NOTICE]   (260513) : path to executable is /usr/sbin/haproxy
Oct 02 19:45:01 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [WARNING]  (260513) : Exiting Master process...
Oct 02 19:45:01 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [WARNING]  (260513) : Exiting Master process...
Oct 02 19:45:01 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [ALERT]    (260513) : Current worker (260515) exited with code 143 (Terminated)
Oct 02 19:45:01 compute-0 neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79[260509]: [WARNING]  (260513) : All workers exited. Exiting... (0)
Oct 02 19:45:01 compute-0 systemd[1]: libpod-0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121.scope: Deactivated successfully.
Oct 02 19:45:01 compute-0 podman[260979]: 2025-10-02 19:45:01.271635957 +0000 UTC m=+0.088576253 container died 0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121-userdata-shm.mount: Deactivated successfully.
Oct 02 19:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-19398ef839772b47ec29fddbb4af540c753880e7e5f34e9287f1d2227d4737d9-merged.mount: Deactivated successfully.
Oct 02 19:45:01 compute-0 podman[260979]: 2025-10-02 19:45:01.33550018 +0000 UTC m=+0.152440476 container cleanup 0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 19:45:01 compute-0 systemd[1]: libpod-conmon-0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121.scope: Deactivated successfully.
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.378 2 INFO nova.compute.manager [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Took 0.46 seconds to destroy the instance on the hypervisor.
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.379 2 DEBUG oslo.service.loopingcall [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.380 2 DEBUG nova.compute.manager [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.380 2 DEBUG nova.network.neutron [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: ERROR   19:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:45:01 compute-0 podman[261019]: 2025-10-02 19:45:01.417769884 +0000 UTC m=+0.055962683 container remove 0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: ERROR   19:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: ERROR   19:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: ERROR   19:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: ERROR   19:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:45:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.434 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[f4da065d-f97f-442b-95ec-b62457e7ffcb]: (4, ('Thu Oct  2 07:45:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79 (0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121)\n0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121\nThu Oct  2 07:45:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79 (0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121)\n0807ccf170836ffdd201c8ee96eca9eda0e9aad7e5092acdd541b6f9949f1121\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.439 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[690fa1b8-5213-45c4-804c-0a965127c8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.440 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61aead9f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 kernel: tap61aead9f-10: left promiscuous mode
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 nova_compute[194781]: 2025-10-02 19:45:01.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.472 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfd6961-9348-4f34-a00d-fe81dc74c7f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.497 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ad413b-d38c-4af0-80cf-448d25818638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.499 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[084078f1-0d8c-4cbe-8cfc-182fa3752799]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.519 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[2581e652-3b87-4592-a8be-1284ddd34598]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541793, 'reachable_time': 20506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261035, 'error': None, 'target': 'ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d61aead9f\x2d19ea\x2d477e\x2db1cf\x2d20f3fec72d79.mount: Deactivated successfully.
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.524 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61aead9f-19ea-477e-b1cf-20f3fec72d79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:45:01 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:01.524 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[686d27dd-eec1-4fe5-bd18-49eafcdd01ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.192 2 DEBUG nova.network.neutron [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.295 2 INFO nova.compute.manager [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Took 1.92 seconds to deallocate network for instance.
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.380 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.381 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.512 2 DEBUG nova.compute.provider_tree [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.541 2 DEBUG nova.scheduler.client.report [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.577 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.600 2 INFO nova.scheduler.client.report [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Deleted allocations for instance 9f5d3eac-e68c-4a0e-8679-0880a0c51bab
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.685 2 DEBUG oslo_concurrency.lockutils [None req-eff1be56-2b60-4455-9d0d-98a8e47ade0e 6477d2ef96bd4c318dea2a18da231121 a0363243e85d429c956681904cf9714d - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.879 2 DEBUG nova.compute.manager [req-5dfaec96-d288-4d83-9613-264716024a32 req-12ce325b-aee7-4fcb-b11d-8edc4d4f54c8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-vif-unplugged-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.879 2 DEBUG oslo_concurrency.lockutils [req-5dfaec96-d288-4d83-9613-264716024a32 req-12ce325b-aee7-4fcb-b11d-8edc4d4f54c8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.879 2 DEBUG oslo_concurrency.lockutils [req-5dfaec96-d288-4d83-9613-264716024a32 req-12ce325b-aee7-4fcb-b11d-8edc4d4f54c8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.879 2 DEBUG oslo_concurrency.lockutils [req-5dfaec96-d288-4d83-9613-264716024a32 req-12ce325b-aee7-4fcb-b11d-8edc4d4f54c8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.880 2 DEBUG nova.compute.manager [req-5dfaec96-d288-4d83-9613-264716024a32 req-12ce325b-aee7-4fcb-b11d-8edc4d4f54c8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] No waiting events found dispatching network-vif-unplugged-bb1981a1-d5bc-4236-97ff-2763b967de6c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:45:03 compute-0 nova_compute[194781]: 2025-10-02 19:45:03.880 2 WARNING nova.compute.manager [req-5dfaec96-d288-4d83-9613-264716024a32 req-12ce325b-aee7-4fcb-b11d-8edc4d4f54c8 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received unexpected event network-vif-unplugged-bb1981a1-d5bc-4236-97ff-2763b967de6c for instance with vm_state deleted and task_state None.
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.235 2 DEBUG nova.compute.manager [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-vif-deleted-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.236 2 DEBUG nova.compute.manager [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.236 2 DEBUG oslo_concurrency.lockutils [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.236 2 DEBUG oslo_concurrency.lockutils [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.236 2 DEBUG oslo_concurrency.lockutils [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "9f5d3eac-e68c-4a0e-8679-0880a0c51bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.237 2 DEBUG nova.compute.manager [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] No waiting events found dispatching network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.237 2 WARNING nova.compute.manager [req-7340eb0a-ebad-4cac-8af6-34f10e191c0e req-6f559967-fea5-4591-b4fa-565c0a9f89b5 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Received unexpected event network-vif-plugged-bb1981a1-d5bc-4236-97ff-2763b967de6c for instance with vm_state deleted and task_state None.
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.645 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.646 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.668 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.739 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.740 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.748 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:45:06 compute-0 nova_compute[194781]: 2025-10-02 19:45:06.749 2 INFO nova.compute.claims [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.032 2 DEBUG nova.compute.provider_tree [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.048 2 DEBUG nova.scheduler.client.report [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.071 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.072 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.130 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.130 2 DEBUG nova.network.neutron [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.149 2 INFO nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.175 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.273 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.275 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.275 2 INFO nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Creating image(s)
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.276 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "/var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.276 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "/var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.277 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "/var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.294 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.360 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.361 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.362 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.380 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.485 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.486 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.535 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.536 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.537 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.607 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.609 2 DEBUG nova.virt.disk.api [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Checking if we can resize image /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.610 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.710 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.712 2 DEBUG nova.virt.disk.api [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Cannot resize image /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.713 2 DEBUG nova.objects.instance [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lazy-loading 'migration_context' on Instance uuid 77c85795-42d5-4ba9-bbb5-b7009b5f992f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.735 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.736 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Ensure instance console log exists: /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.737 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.738 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:07 compute-0 nova_compute[194781]: 2025-10-02 19:45:07.739 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:08 compute-0 nova_compute[194781]: 2025-10-02 19:45:08.268 2 DEBUG nova.policy [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4ebdfb48323c4124b435387dfed92c5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ed4915cd456424c8ac561ce0da33795', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:45:09 compute-0 nova_compute[194781]: 2025-10-02 19:45:09.364 2 DEBUG nova.network.neutron [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Successfully created port: 603f706b-6b06-4ad2-b22b-b118c9d68755 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:45:10 compute-0 ovn_controller[97052]: 2025-10-02T19:45:10Z|00163|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:45:10 compute-0 ovn_controller[97052]: 2025-10-02T19:45:10Z|00164|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:45:10 compute-0 nova_compute[194781]: 2025-10-02 19:45:10.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.036 2 DEBUG nova.network.neutron [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Successfully updated port: 603f706b-6b06-4ad2-b22b-b118c9d68755 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.057 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.057 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquired lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.057 2 DEBUG nova.network.neutron [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.129 2 DEBUG nova.compute.manager [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-changed-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.129 2 DEBUG nova.compute.manager [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Refreshing instance network info cache due to event network-changed-603f706b-6b06-4ad2-b22b-b118c9d68755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.130 2 DEBUG oslo_concurrency.lockutils [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.242 2 DEBUG nova.network.neutron [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:45:11 compute-0 nova_compute[194781]: 2025-10-02 19:45:11.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:11 compute-0 podman[261052]: 2025-10-02 19:45:11.702048115 +0000 UTC m=+0.081307318 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:45:11 compute-0 podman[261053]: 2025-10-02 19:45:11.719957692 +0000 UTC m=+0.084431571 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.203 2 DEBUG nova.network.neutron [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updating instance_info_cache with network_info: [{"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.244 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Releasing lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.245 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Instance network_info: |[{"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.246 2 DEBUG oslo_concurrency.lockutils [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.247 2 DEBUG nova.network.neutron [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Refreshing network info cache for port 603f706b-6b06-4ad2-b22b-b118c9d68755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.252 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Start _get_guest_xml network_info=[{"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.262 2 WARNING nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.280 2 DEBUG nova.virt.libvirt.host [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.282 2 DEBUG nova.virt.libvirt.host [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.289 2 DEBUG nova.virt.libvirt.host [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.289 2 DEBUG nova.virt.libvirt.host [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.290 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.290 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.291 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.291 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.292 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.292 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.292 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.293 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.293 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.293 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.294 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.294 2 DEBUG nova.virt.hardware [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.299 2 DEBUG nova.virt.libvirt.vif [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:45:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1268837722',display_name='tempest-TestNetworkBasicOps-server-1268837722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1268837722',id=13,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDrc+P0gsZQh0uGJUnr3zYP2O1zW9xC5+fi4i/ADlGpzcztyBgA5/6BS7XO85nY74cc89ZtOchpc4l7DeCBBR4+8aE6DrVwzE9zO6adBQFT2VqIAiIf8DphwMa6Q/KJOlg==',key_name='tempest-TestNetworkBasicOps-1819364670',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ed4915cd456424c8ac561ce0da33795',ramdisk_id='',reservation_id='r-6c4geriv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1499067436',owner_user_name='tempest-TestNetworkBasicOps-1499067436-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:45:07Z,user_data=None,user_id='4ebdfb48323c4124b435387dfed92c5e',uuid=77c85795-42d5-4ba9-bbb5-b7009b5f992f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.300 2 DEBUG nova.network.os_vif_util [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converting VIF {"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.301 2 DEBUG nova.network.os_vif_util [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.302 2 DEBUG nova.objects.instance [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77c85795-42d5-4ba9-bbb5-b7009b5f992f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.322 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <uuid>77c85795-42d5-4ba9-bbb5-b7009b5f992f</uuid>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <name>instance-0000000d</name>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:name>tempest-TestNetworkBasicOps-server-1268837722</nova:name>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:45:12</nova:creationTime>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:user uuid="4ebdfb48323c4124b435387dfed92c5e">tempest-TestNetworkBasicOps-1499067436-project-member</nova:user>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:project uuid="4ed4915cd456424c8ac561ce0da33795">tempest-TestNetworkBasicOps-1499067436</nova:project>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         <nova:port uuid="603f706b-6b06-4ad2-b22b-b118c9d68755">
Oct 02 19:45:12 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <system>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <entry name="serial">77c85795-42d5-4ba9-bbb5-b7009b5f992f</entry>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <entry name="uuid">77c85795-42d5-4ba9-bbb5-b7009b5f992f</entry>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </system>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <os>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </os>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <features>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </features>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.config"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:72:b6:fc"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <target dev="tap603f706b-6b"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/console.log" append="off"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <video>
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </video>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:45:12 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:45:12 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:45:12 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:45:12 compute-0 nova_compute[194781]: </domain>
Oct 02 19:45:12 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.324 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Preparing to wait for external event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.324 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.324 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.325 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.326 2 DEBUG nova.virt.libvirt.vif [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:45:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1268837722',display_name='tempest-TestNetworkBasicOps-server-1268837722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1268837722',id=13,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDrc+P0gsZQh0uGJUnr3zYP2O1zW9xC5+fi4i/ADlGpzcztyBgA5/6BS7XO85nY74cc89ZtOchpc4l7DeCBBR4+8aE6DrVwzE9zO6adBQFT2VqIAiIf8DphwMa6Q/KJOlg==',key_name='tempest-TestNetworkBasicOps-1819364670',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ed4915cd456424c8ac561ce0da33795',ramdisk_id='',reservation_id='r-6c4geriv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1499067436',owner_user_name='tempest-TestNetworkBasicOps-1499067436-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:45:07Z,user_data=None,user_id='4ebdfb48323c4124b435387dfed92c5e',uuid=77c85795-42d5-4ba9-bbb5-b7009b5f992f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.326 2 DEBUG nova.network.os_vif_util [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converting VIF {"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.327 2 DEBUG nova.network.os_vif_util [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.327 2 DEBUG os_vif [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap603f706b-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap603f706b-6b, col_values=(('external_ids', {'iface-id': '603f706b-6b06-4ad2-b22b-b118c9d68755', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:b6:fc', 'vm-uuid': '77c85795-42d5-4ba9-bbb5-b7009b5f992f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:12 compute-0 NetworkManager[52324]: <info>  [1759434312.3362] manager: (tap603f706b-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.347 2 INFO os_vif [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b')
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.422 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.423 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.423 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] No VIF found with MAC fa:16:3e:72:b6:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.424 2 INFO nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Using config drive
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.947 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.947 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fceade0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.953 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 77c85795-42d5-4ba9-bbb5-b7009b5f992f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:45:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:12.955 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/77c85795-42d5-4ba9-bbb5-b7009b5f992f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.979 2 INFO nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Creating config drive at /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.config
Oct 02 19:45:12 compute-0 nova_compute[194781]: 2025-10-02 19:45:12.983 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwpedyebt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.128 2 DEBUG oslo_concurrency.processutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwpedyebt" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:13 compute-0 kernel: tap603f706b-6b: entered promiscuous mode
Oct 02 19:45:13 compute-0 ovn_controller[97052]: 2025-10-02T19:45:13Z|00165|binding|INFO|Claiming lport 603f706b-6b06-4ad2-b22b-b118c9d68755 for this chassis.
Oct 02 19:45:13 compute-0 ovn_controller[97052]: 2025-10-02T19:45:13Z|00166|binding|INFO|603f706b-6b06-4ad2-b22b-b118c9d68755: Claiming fa:16:3e:72:b6:fc 10.100.0.11
Oct 02 19:45:13 compute-0 NetworkManager[52324]: <info>  [1759434313.2165] manager: (tap603f706b-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ovn_controller[97052]: 2025-10-02T19:45:13Z|00167|binding|INFO|Setting lport 603f706b-6b06-4ad2-b22b-b118c9d68755 ovn-installed in OVS
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.238 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:b6:fc 10.100.0.11'], port_security=['fa:16:3e:72:b6:fc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77c85795-42d5-4ba9-bbb5-b7009b5f992f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ed4915cd456424c8ac561ce0da33795', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb0a552f-0bf7-41d1-8336-c4db68805f5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4d2301e-c986-4618-9fd9-f3243fb030c9, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=603f706b-6b06-4ad2-b22b-b118c9d68755) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.239 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 603f706b-6b06-4ad2-b22b-b118c9d68755 in datapath 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 bound to our chassis
Oct 02 19:45:13 compute-0 ovn_controller[97052]: 2025-10-02T19:45:13Z|00168|binding|INFO|Setting lport 603f706b-6b06-4ad2-b22b-b118c9d68755 up in Southbound
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.241 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.259 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[a6247771-99fc-4d31-a4b7-6a7578913d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.259 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2c6f59f2-a1 in ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.262 246899 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2c6f59f2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.262 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[61f49edd-c4d4-49a5-a090-67b3cedbe532]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.263 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[49e18e12-c0e8-4564-aa5e-37a78f20cdfb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.278 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[7e781c32-9039-48e1-b3b9-a7f74b76fb4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 systemd-machined[154795]: New machine qemu-14-instance-0000000d.
Oct 02 19:45:13 compute-0 systemd-udevd[261112]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:45:13 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Oct 02 19:45:13 compute-0 NetworkManager[52324]: <info>  [1759434313.3020] device (tap603f706b-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:45:13 compute-0 NetworkManager[52324]: <info>  [1759434313.3046] device (tap603f706b-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.304 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cf28c0-7f58-4168-9c93-613b37c653c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.332 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[7f03d21b-ff90-4246-9906-a0b6f82a52af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 NetworkManager[52324]: <info>  [1759434313.3383] manager: (tap2c6f59f2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.339 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[def1483c-c67a-493f-bccb-e9875dce36f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.379 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b7021f-b5e1-4004-9300-6c44332da56a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.382 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[16dfc388-07b9-496f-a5a0-08295c25d996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 NetworkManager[52324]: <info>  [1759434313.4093] device (tap2c6f59f2-a0): carrier: link connected
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.417 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[c7803e1b-8d56-4aa0-a2c6-ba2d1505d273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.434 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[b43048ba-97c0-40b5-a678-3e32e1f64d0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c6f59f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:9e:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547403, 'reachable_time': 34196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261142, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.449 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[e57af4d5-3d3d-4367-a3a3-8219c7c086e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:9ec8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547403, 'tstamp': 547403}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261143, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.464 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[28deafcb-e135-4f10-847e-7d1a463c1c0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c6f59f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:9e:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547403, 'reachable_time': 34196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261144, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.501 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[bb946ca8-d666-479c-b98e-67529c18320e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.569 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[678f3c1d-413f-4d2d-8a00-d2166a6478ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.571 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c6f59f2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.572 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.572 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c6f59f2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:13 compute-0 kernel: tap2c6f59f2-a0: entered promiscuous mode
Oct 02 19:45:13 compute-0 NetworkManager[52324]: <info>  [1759434313.5763] manager: (tap2c6f59f2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.582 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c6f59f2-a0, col_values=(('external_ids', {'iface-id': 'fb07e353-d679-475b-a1f5-b73dcea986a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:13 compute-0 ovn_controller[97052]: 2025-10-02T19:45:13Z|00169|binding|INFO|Releasing lport fb07e353-d679-475b-a1f5-b73dcea986a1 from this chassis (sb_readonly=0)
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.587 105943 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.596 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[097f9483-dc3a-4227-8db9-350a0e4ed87f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.598 105943 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: global
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     log         /dev/log local0 debug
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     log-tag     haproxy-metadata-proxy-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     user        root
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     group       root
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     maxconn     1024
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     pidfile     /var/lib/neutron/external/pids/2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8.pid.haproxy
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     daemon
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: defaults
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     log global
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     mode http
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     option httplog
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     option dontlognull
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     option http-server-close
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     option forwardfor
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     retries                 3
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     timeout http-request    30s
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     timeout connect         30s
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     timeout client          32s
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     timeout server          32s
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     timeout http-keep-alive 30s
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: listen listener
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     bind 169.254.169.254:80
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:     http-request add-header X-OVN-Network-ID 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 19:45:13 compute-0 nova_compute[194781]: 2025-10-02 19:45:13.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:13.603 105943 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'env', 'PROCESS_TAG=haproxy-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 19:45:14 compute-0 podman[261180]: 2025-10-02 19:45:14.027798789 +0000 UTC m=+0.066780081 container create 5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:45:14 compute-0 systemd[1]: Started libpod-conmon-5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee.scope.
Oct 02 19:45:14 compute-0 podman[261180]: 2025-10-02 19:45:13.989989541 +0000 UTC m=+0.028970843 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 19:45:14 compute-0 systemd[1]: Started libcrun container.
Oct 02 19:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b452ef326dbb0051b42ecf4a020799b74a9f474e60521b4b58b79ddec358c38/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 19:45:14 compute-0 podman[261180]: 2025-10-02 19:45:14.129410639 +0000 UTC m=+0.168391951 container init 5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 19:45:14 compute-0 podman[261180]: 2025-10-02 19:45:14.137293919 +0000 UTC m=+0.176275211 container start 5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 19:45:14 compute-0 podman[261195]: 2025-10-02 19:45:14.146067933 +0000 UTC m=+0.080757924 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Oct 02 19:45:14 compute-0 podman[261192]: 2025-10-02 19:45:14.146408272 +0000 UTC m=+0.086382764 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Oct 02 19:45:14 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [NOTICE]   (261250) : New worker (261252) forked
Oct 02 19:45:14 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [NOTICE]   (261250) : Loading success.
Oct 02 19:45:14 compute-0 podman[261196]: 2025-10-02 19:45:14.170103504 +0000 UTC m=+0.104777785 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.439 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434314.4386637, 77c85795-42d5-4ba9-bbb5-b7009b5f992f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.441 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] VM Started (Lifecycle Event)
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.478 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.485 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434314.4387906, 77c85795-42d5-4ba9-bbb5-b7009b5f992f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.486 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] VM Paused (Lifecycle Event)
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.540 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.547 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:45:14 compute-0 nova_compute[194781]: 2025-10-02 19:45:14.607 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.201 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434301.2006445, 9f5d3eac-e68c-4a0e-8679-0880a0c51bab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.203 2 INFO nova.compute.manager [-] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] VM Stopped (Lifecycle Event)
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.234 2 DEBUG nova.compute.manager [None req-6493cd7f-1948-48d8-a013-7d825e7dbe67 - - - - - -] [instance: 9f5d3eac-e68c-4a0e-8679-0880a0c51bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.280 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1681 Content-Type: application/json Date: Thu, 02 Oct 2025 19:45:12 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fb674848-3b31-42da-b987-7d2e55f2edaf x-openstack-request-id: req-fb674848-3b31-42da-b987-7d2e55f2edaf _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.281 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "77c85795-42d5-4ba9-bbb5-b7009b5f992f", "name": "tempest-TestNetworkBasicOps-server-1268837722", "status": "BUILD", "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "user_id": "4ebdfb48323c4124b435387dfed92c5e", "metadata": {}, "hostId": "1dd739aef81e577b6434a864c1fae4d6951c17aeabfeb1942c947911", "image": {"id": "c191839f-7364-41ce-80c8-eff8077fc750", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/c191839f-7364-41ce-80c8-eff8077fc750"}]}, "flavor": {"id": "7ab5ea96-81dd-4496-8a1f-012f7d2c53c5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7ab5ea96-81dd-4496-8a1f-012f7d2c53c5"}]}, "created": "2025-10-02T19:45:04Z", "updated": "2025-10-02T19:45:07Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/77c85795-42d5-4ba9-bbb5-b7009b5f992f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/77c85795-42d5-4ba9-bbb5-b7009b5f992f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": "tempest-TestNetworkBasicOps-1819364670", "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-487265812"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.281 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/77c85795-42d5-4ba9-bbb5-b7009b5f992f used request id req-fb674848-3b31-42da-b987-7d2e55f2edaf request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.283 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '77c85795-42d5-4ba9-bbb5-b7009b5f992f', 'name': 'tempest-TestNetworkBasicOps-server-1268837722', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c191839f-7364-41ce-80c8-eff8077fc750'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': '4ed4915cd456424c8ac561ce0da33795', 'user_id': '4ebdfb48323c4124b435387dfed92c5e', 'hostId': '1dd739aef81e577b6434a864c1fae4d6951c17aeabfeb1942c947911', 'status': 'paused', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.286 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.289 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.289 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:45:16.290227) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.315 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.350 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 112440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.374 2 DEBUG nova.network.neutron [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updated VIF entry in instance network info cache for port 603f706b-6b06-4ad2-b22b-b118c9d68755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.375 2 DEBUG nova.network.neutron [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updating instance_info_cache with network_info: [{"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.380 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 53940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.381 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.382 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.382 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 77c85795-42d5-4ba9-bbb5-b7009b5f992f: ceilometer.compute.pollsters.NoVolumeException
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.382 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: 43.63671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.383 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.383 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:45:16.381958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:45:16.384908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.388 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 77c85795-42d5-4ba9-bbb5-b7009b5f992f / tap603f706b-6b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.389 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.392 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.396 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.398 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.398 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:45:16.398550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.399 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.399 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 1652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.399 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.400 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.401 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.401 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.401 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:45:16.401026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.401 2 DEBUG oslo_concurrency.lockutils [req-3c4e9c8f-d126-4738-8b65-195a2bb512eb req-fc2d89a8-f614-4c5e-8f44-ef624a59b7dd fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.403 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.403 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.403 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:45:16.403402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.404 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.404 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:45:16.405747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.406 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.406 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.406 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.407 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:45:16.407913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.408 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.408 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1268837722>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1268837722>]
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.409 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:45:16.409631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.432 2 DEBUG nova.compute.manager [req-b08737d2-ebcb-4a48-bf9f-4e3aa8eb6a34 req-337a0676-0def-4a67-9d3f-64ecc07f7607 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.433 2 DEBUG oslo_concurrency.lockutils [req-b08737d2-ebcb-4a48-bf9f-4e3aa8eb6a34 req-337a0676-0def-4a67-9d3f-64ecc07f7607 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.434 2 DEBUG oslo_concurrency.lockutils [req-b08737d2-ebcb-4a48-bf9f-4e3aa8eb6a34 req-337a0676-0def-4a67-9d3f-64ecc07f7607 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.434 2 DEBUG oslo_concurrency.lockutils [req-b08737d2-ebcb-4a48-bf9f-4e3aa8eb6a34 req-337a0676-0def-4a67-9d3f-64ecc07f7607 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.435 2 DEBUG nova.compute.manager [req-b08737d2-ebcb-4a48-bf9f-4e3aa8eb6a34 req-337a0676-0def-4a67-9d3f-64ecc07f7607 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Processing event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.436 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.437 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.438 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.439 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434316.4396913, 77c85795-42d5-4ba9-bbb5-b7009b5f992f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.440 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] VM Resumed (Lifecycle Event)
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.456 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.458 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.463 2 INFO nova.virt.libvirt.driver [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Instance spawned successfully.
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.463 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.466 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.484 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.485 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.499 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.505 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.506 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.507 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.508 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.509 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.510 2 DEBUG nova.virt.libvirt.driver [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.543 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.543 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.544 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.545 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.545 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:45:16.545758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.546 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.547 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 1542 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.547 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.549 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.549 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:45:16.549621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.550 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.550 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.551 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.551 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.551 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.552 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.553 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:45:16.552135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.572 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.572 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.580 2 INFO nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Took 9.31 seconds to spawn the instance on the hypervisor.
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.581 2 DEBUG nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.599 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.599 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.626 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.626 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.626 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.628 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.628 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.628 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 1069571389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.629 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 104981662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:45:16.628169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.630 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.630 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.630 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1268837722>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1268837722>]
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.632 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.633 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.633 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.633 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.633 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.634 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.634 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.634 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.635 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.636 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.636 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.637 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.637 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.637 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.638 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.638 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.638 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.639 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.639 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:45:16.631845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:45:16.632957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:45:16.635713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:45:16.637531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.641 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.641 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.641 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.642 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.642 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.642 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.642 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:45:16.641128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.644 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.644 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.645 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 5202028856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.645 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.645 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.646 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.646 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:45:16.644506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.647 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.648 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.648 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.648 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:45:16.647743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.649 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.649 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.649 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.650 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.651 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.651 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.651 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.652 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.652 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.652 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.652 2 INFO nova.compute.manager [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Took 9.94 seconds to build instance.
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.653 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.654 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.654 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.655 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.655 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.658 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.658 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:45:16.650828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:45:16.653737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:45:16.655365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:45:16.656600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:45:16.658005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.658 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.659 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 DEBUG ceilometer.compute.pollsters [-] 77c85795-42d5-4ba9-bbb5-b7009b5f992f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.660 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.661 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:45:16.660416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.663 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.664 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:45:16.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:45:16 compute-0 nova_compute[194781]: 2025-10-02 19:45:16.670 2 DEBUG oslo_concurrency.lockutils [None req-3345e30e-c0fc-4eba-9973-1c6813a60cc0 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:17 compute-0 nova_compute[194781]: 2025-10-02 19:45:17.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.396 2 DEBUG nova.compute.manager [req-615645a6-55a4-41dd-9687-cb0b3d3aa511 req-9fa97eef-43aa-40ff-a5c1-8a741d533187 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.397 2 DEBUG oslo_concurrency.lockutils [req-615645a6-55a4-41dd-9687-cb0b3d3aa511 req-9fa97eef-43aa-40ff-a5c1-8a741d533187 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.397 2 DEBUG oslo_concurrency.lockutils [req-615645a6-55a4-41dd-9687-cb0b3d3aa511 req-9fa97eef-43aa-40ff-a5c1-8a741d533187 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.398 2 DEBUG oslo_concurrency.lockutils [req-615645a6-55a4-41dd-9687-cb0b3d3aa511 req-9fa97eef-43aa-40ff-a5c1-8a741d533187 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.398 2 DEBUG nova.compute.manager [req-615645a6-55a4-41dd-9687-cb0b3d3aa511 req-9fa97eef-43aa-40ff-a5c1-8a741d533187 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] No waiting events found dispatching network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.399 2 WARNING nova.compute.manager [req-615645a6-55a4-41dd-9687-cb0b3d3aa511 req-9fa97eef-43aa-40ff-a5c1-8a741d533187 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received unexpected event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 for instance with vm_state active and task_state None.
Oct 02 19:45:19 compute-0 ovn_controller[97052]: 2025-10-02T19:45:19Z|00170|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:45:19 compute-0 ovn_controller[97052]: 2025-10-02T19:45:19Z|00171|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:45:19 compute-0 ovn_controller[97052]: 2025-10-02T19:45:19Z|00172|binding|INFO|Releasing lport fb07e353-d679-475b-a1f5-b73dcea986a1 from this chassis (sb_readonly=0)
Oct 02 19:45:19 compute-0 nova_compute[194781]: 2025-10-02 19:45:19.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:21 compute-0 nova_compute[194781]: 2025-10-02 19:45:21.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:22 compute-0 nova_compute[194781]: 2025-10-02 19:45:22.143 2 DEBUG nova.compute.manager [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-changed-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:22 compute-0 nova_compute[194781]: 2025-10-02 19:45:22.143 2 DEBUG nova.compute.manager [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Refreshing instance network info cache due to event network-changed-603f706b-6b06-4ad2-b22b-b118c9d68755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:45:22 compute-0 nova_compute[194781]: 2025-10-02 19:45:22.144 2 DEBUG oslo_concurrency.lockutils [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:22 compute-0 nova_compute[194781]: 2025-10-02 19:45:22.144 2 DEBUG oslo_concurrency.lockutils [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:22 compute-0 nova_compute[194781]: 2025-10-02 19:45:22.144 2 DEBUG nova.network.neutron [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Refreshing network info cache for port 603f706b-6b06-4ad2-b22b-b118c9d68755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:45:22 compute-0 nova_compute[194781]: 2025-10-02 19:45:22.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:22 compute-0 podman[261264]: 2025-10-02 19:45:22.720683567 +0000 UTC m=+0.079666665 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:45:22 compute-0 podman[261263]: 2025-10-02 19:45:22.747615985 +0000 UTC m=+0.115128401 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:45:23 compute-0 nova_compute[194781]: 2025-10-02 19:45:23.382 2 DEBUG nova.network.neutron [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updated VIF entry in instance network info cache for port 603f706b-6b06-4ad2-b22b-b118c9d68755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:45:23 compute-0 nova_compute[194781]: 2025-10-02 19:45:23.384 2 DEBUG nova.network.neutron [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updating instance_info_cache with network_info: [{"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:23 compute-0 nova_compute[194781]: 2025-10-02 19:45:23.407 2 DEBUG oslo_concurrency.lockutils [req-e392904b-fed4-4bef-9a91-7e3ad1745149 req-438ac691-aba1-47e0-8684-1fe0a4acbec1 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:25 compute-0 podman[261303]: 2025-10-02 19:45:25.744549787 +0000 UTC m=+0.118312726 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:45:25 compute-0 podman[261304]: 2025-10-02 19:45:25.784733388 +0000 UTC m=+0.155819106 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct 02 19:45:26 compute-0 nova_compute[194781]: 2025-10-02 19:45:26.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:27 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:27.028 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:45:27 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:27.033 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:45:27 compute-0 nova_compute[194781]: 2025-10-02 19:45:27.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:27 compute-0 nova_compute[194781]: 2025-10-02 19:45:27.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:28 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:28.038 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:29 compute-0 podman[209015]: time="2025-10-02T19:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:45:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34447 "" "Go-http-client/1.1"
Oct 02 19:45:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6146 "" "Go-http-client/1.1"
Oct 02 19:45:31 compute-0 nova_compute[194781]: 2025-10-02 19:45:31.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:31 compute-0 openstack_network_exporter[211160]: ERROR   19:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:31 compute-0 openstack_network_exporter[211160]: ERROR   19:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:45:31 compute-0 openstack_network_exporter[211160]: ERROR   19:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:45:31 compute-0 openstack_network_exporter[211160]: ERROR   19:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:45:31 compute-0 openstack_network_exporter[211160]: ERROR   19:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:45:31 compute-0 podman[261344]: 2025-10-02 19:45:31.693411308 +0000 UTC m=+0.075333090 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:45:32 compute-0 nova_compute[194781]: 2025-10-02 19:45:32.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:32 compute-0 nova_compute[194781]: 2025-10-02 19:45:32.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:36 compute-0 nova_compute[194781]: 2025-10-02 19:45:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:36 compute-0 nova_compute[194781]: 2025-10-02 19:45:36.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:37 compute-0 nova_compute[194781]: 2025-10-02 19:45:37.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:37 compute-0 nova_compute[194781]: 2025-10-02 19:45:37.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:45:37 compute-0 nova_compute[194781]: 2025-10-02 19:45:37.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.066 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.068 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.214 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.316 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.317 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.414 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.422 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.501 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.502 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.584 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.596 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.654 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.655 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.714 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.715 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.772 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.774 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:38 compute-0 nova_compute[194781]: 2025-10-02 19:45:38.847 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.273 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.275 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4762MB free_disk=72.38126373291016GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.275 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.276 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.478 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.479 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.479 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 77c85795-42d5-4ba9-bbb5-b7009b5f992f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.480 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.480 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.645 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.926 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.928 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.948 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:45:39 compute-0 nova_compute[194781]: 2025-10-02 19:45:39.970 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:45:40 compute-0 nova_compute[194781]: 2025-10-02 19:45:40.058 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:45:40 compute-0 nova_compute[194781]: 2025-10-02 19:45:40.077 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:45:40 compute-0 nova_compute[194781]: 2025-10-02 19:45:40.105 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:45:40 compute-0 nova_compute[194781]: 2025-10-02 19:45:40.106 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:41 compute-0 nova_compute[194781]: 2025-10-02 19:45:41.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:41 compute-0 nova_compute[194781]: 2025-10-02 19:45:41.106 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:41 compute-0 nova_compute[194781]: 2025-10-02 19:45:41.107 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.313 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.314 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.547 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:45:42 compute-0 podman[261394]: 2025-10-02 19:45:42.744724881 +0000 UTC m=+0.108702910 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Oct 02 19:45:42 compute-0 podman[261395]: 2025-10-02 19:45:42.769753348 +0000 UTC m=+0.124893701 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.776 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.777 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.785 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.786 2 INFO nova.compute.claims [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:45:42 compute-0 nova_compute[194781]: 2025-10-02 19:45:42.992 2 DEBUG nova.compute.provider_tree [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.020 2 DEBUG nova.scheduler.client.report [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.055 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.056 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.147 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.147 2 DEBUG nova.network.neutron [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.190 2 INFO nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.271 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.406 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.407 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.407 2 INFO nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Creating image(s)
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.408 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "/var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.408 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "/var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.408 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "/var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.432 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.529 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.532 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.534 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.558 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.641 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.642 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e,backing_fmt=raw /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.703 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e,backing_fmt=raw /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.705 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.705 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.773 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.774 2 DEBUG nova.virt.disk.api [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Checking if we can resize image /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.775 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.855 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.857 2 DEBUG nova.virt.disk.api [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Cannot resize image /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.857 2 DEBUG nova.objects.instance [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lazy-loading 'migration_context' on Instance uuid ead9703a-68cd-4f65-a0dd-296c0a357b90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.934 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.935 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Ensure instance console log exists: /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.936 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.936 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:43 compute-0 nova_compute[194781]: 2025-10-02 19:45:43.937 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:44 compute-0 nova_compute[194781]: 2025-10-02 19:45:44.495 2 DEBUG nova.policy [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3dae65399d7c47999282bff6664f6d16', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:45:44 compute-0 podman[261444]: 2025-10-02 19:45:44.760363515 +0000 UTC m=+0.117594156 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350)
Oct 02 19:45:44 compute-0 podman[261445]: 2025-10-02 19:45:44.770465885 +0000 UTC m=+0.118600914 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible)
Oct 02 19:45:44 compute-0 podman[261446]: 2025-10-02 19:45:44.78603101 +0000 UTC m=+0.133125801 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:45:45 compute-0 nova_compute[194781]: 2025-10-02 19:45:45.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:45 compute-0 nova_compute[194781]: 2025-10-02 19:45:45.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:45:45 compute-0 nova_compute[194781]: 2025-10-02 19:45:45.062 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:45:46 compute-0 nova_compute[194781]: 2025-10-02 19:45:46.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:46 compute-0 nova_compute[194781]: 2025-10-02 19:45:46.477 2 DEBUG nova.network.neutron [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Successfully created port: 722eab1f-2c73-4b59-9732-99ee52407450 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.350 2 DEBUG nova.network.neutron [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Successfully updated port: 722eab1f-2c73-4b59-9732-99ee52407450 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.362 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.363 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquired lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.363 2 DEBUG nova.network.neutron [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:47.491 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:47.492 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:47.493 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.496 2 DEBUG nova.compute.manager [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-changed-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.497 2 DEBUG nova.compute.manager [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Refreshing instance network info cache due to event network-changed-722eab1f-2c73-4b59-9732-99ee52407450. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.497 2 DEBUG oslo_concurrency.lockutils [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:47 compute-0 nova_compute[194781]: 2025-10-02 19:45:47.574 2 DEBUG nova.network.neutron [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.227 2 DEBUG nova.network.neutron [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updating instance_info_cache with network_info: [{"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.272 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Releasing lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.273 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Instance network_info: |[{"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.273 2 DEBUG oslo_concurrency.lockutils [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.274 2 DEBUG nova.network.neutron [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Refreshing network info cache for port 722eab1f-2c73-4b59-9732-99ee52407450 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.276 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Start _get_guest_xml network_info=[{"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:42:55Z,direct_url=<?>,disk_format='qcow2',id=b43dc593-d176-449d-a8d5-95d53b8e1b5e,min_disk=0,min_ram=0,name='tempest-scenario-img--1036197514',owner='3dae65399d7c47999282bff6664f6d16',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:42:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.294 2 WARNING nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.304 2 DEBUG nova.virt.libvirt.host [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.305 2 DEBUG nova.virt.libvirt.host [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.310 2 DEBUG nova.virt.libvirt.host [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.311 2 DEBUG nova.virt.libvirt.host [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.312 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.313 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:42:55Z,direct_url=<?>,disk_format='qcow2',id=b43dc593-d176-449d-a8d5-95d53b8e1b5e,min_disk=0,min_ram=0,name='tempest-scenario-img--1036197514',owner='3dae65399d7c47999282bff6664f6d16',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:42:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.315 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.316 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.317 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.318 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.319 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.320 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.322 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.323 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.324 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.325 2 DEBUG nova.virt.hardware [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.331 2 DEBUG nova.virt.libvirt.vif [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta',id=14,image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d4713e41-6620-49a4-8665-1b2fbe664d9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3dae65399d7c47999282bff6664f6d16',ramdisk_id='',reservation_id='r-04x7pqzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-732152950',owner_user_name='tempest-PrometheusGabbiTest-732152950-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:45:43Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='23b5415980f24bbbbfa331c702f6f7d9',uuid=ead9703a-68cd-4f65-a0dd-296c0a357b90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.332 2 DEBUG nova.network.os_vif_util [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converting VIF {"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.333 2 DEBUG nova.network.os_vif_util [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.335 2 DEBUG nova.objects.instance [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lazy-loading 'pci_devices' on Instance uuid ead9703a-68cd-4f65-a0dd-296c0a357b90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.337 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.338 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.339 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.362 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <uuid>ead9703a-68cd-4f65-a0dd-296c0a357b90</uuid>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <name>instance-0000000e</name>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:name>te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta</nova:name>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:45:49</nova:creationTime>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:user uuid="23b5415980f24bbbbfa331c702f6f7d9">tempest-PrometheusGabbiTest-732152950-project-member</nova:user>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:project uuid="3dae65399d7c47999282bff6664f6d16">tempest-PrometheusGabbiTest-732152950</nova:project>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="b43dc593-d176-449d-a8d5-95d53b8e1b5e"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         <nova:port uuid="722eab1f-2c73-4b59-9732-99ee52407450">
Oct 02 19:45:49 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.62" ipVersion="4"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <system>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <entry name="serial">ead9703a-68cd-4f65-a0dd-296c0a357b90</entry>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <entry name="uuid">ead9703a-68cd-4f65-a0dd-296c0a357b90</entry>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </system>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <os>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </os>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <features>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </features>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.config"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:c7:57:cd"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <target dev="tap722eab1f-2c"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/console.log" append="off"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <video>
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </video>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:45:49 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:45:49 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:45:49 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:45:49 compute-0 nova_compute[194781]: </domain>
Oct 02 19:45:49 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.371 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Preparing to wait for external event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.373 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.373 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.374 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.376 2 DEBUG nova.virt.libvirt.vif [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta',id=14,image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d4713e41-6620-49a4-8665-1b2fbe664d9c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3dae65399d7c47999282bff6664f6d16',ramdisk_id='',reservation_id='r-04x7pqzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-732152950',owner_user_name='tempest-PrometheusGabbiTest-732152950-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:45:43Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='23b5415980f24bbbbfa331c702f6f7d9',uuid=ead9703a-68cd-4f65-a0dd-296c0a357b90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.377 2 DEBUG nova.network.os_vif_util [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converting VIF {"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.378 2 DEBUG nova.network.os_vif_util [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.379 2 DEBUG os_vif [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap722eab1f-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap722eab1f-2c, col_values=(('external_ids', {'iface-id': '722eab1f-2c73-4b59-9732-99ee52407450', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:57:cd', 'vm-uuid': 'ead9703a-68cd-4f65-a0dd-296c0a357b90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:49 compute-0 NetworkManager[52324]: <info>  [1759434349.3954] manager: (tap722eab1f-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.404 2 INFO os_vif [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c')
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.457 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.457 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.458 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] No VIF found with MAC fa:16:3e:c7:57:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:45:49 compute-0 nova_compute[194781]: 2025-10-02 19:45:49.458 2 INFO nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Using config drive
Oct 02 19:45:49 compute-0 ovn_controller[97052]: 2025-10-02T19:45:49Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:b6:fc 10.100.0.11
Oct 02 19:45:49 compute-0 ovn_controller[97052]: 2025-10-02T19:45:49Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:b6:fc 10.100.0.11
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.243 2 INFO nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Creating config drive at /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.config
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.252 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwb878xpf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.390 2 DEBUG oslo_concurrency.processutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwb878xpf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:45:50 compute-0 kernel: tap722eab1f-2c: entered promiscuous mode
Oct 02 19:45:50 compute-0 NetworkManager[52324]: <info>  [1759434350.4674] manager: (tap722eab1f-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:50 compute-0 ovn_controller[97052]: 2025-10-02T19:45:50Z|00173|binding|INFO|Claiming lport 722eab1f-2c73-4b59-9732-99ee52407450 for this chassis.
Oct 02 19:45:50 compute-0 ovn_controller[97052]: 2025-10-02T19:45:50Z|00174|binding|INFO|722eab1f-2c73-4b59-9732-99ee52407450: Claiming fa:16:3e:c7:57:cd 10.100.0.62
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.480 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:57:cd 10.100.0.62'], port_security=['fa:16:3e:c7:57:cd 10.100.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.62/16', 'neutron:device_id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dae65399d7c47999282bff6664f6d16', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb16109a-6359-4dd8-bfae-0a7015239961', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31c9bff4-971d-41c4-a82c-3f2067f94d21, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=722eab1f-2c73-4b59-9732-99ee52407450) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.481 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 722eab1f-2c73-4b59-9732-99ee52407450 in datapath b8407621-6f3e-4864-b018-8cf0d0e8428e bound to our chassis
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.483 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8407621-6f3e-4864-b018-8cf0d0e8428e
Oct 02 19:45:50 compute-0 ovn_controller[97052]: 2025-10-02T19:45:50Z|00175|binding|INFO|Setting lport 722eab1f-2c73-4b59-9732-99ee52407450 ovn-installed in OVS
Oct 02 19:45:50 compute-0 ovn_controller[97052]: 2025-10-02T19:45:50Z|00176|binding|INFO|Setting lport 722eab1f-2c73-4b59-9732-99ee52407450 up in Southbound
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:50 compute-0 systemd-machined[154795]: New machine qemu-15-instance-0000000e.
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.512 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[187f6689-a84b-4f5b-b607-7a9e61100d28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:50 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Oct 02 19:45:50 compute-0 systemd-udevd[261541]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:45:50 compute-0 NetworkManager[52324]: <info>  [1759434350.5593] device (tap722eab1f-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:45:50 compute-0 NetworkManager[52324]: <info>  [1759434350.5633] device (tap722eab1f-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.566 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[04f45dd6-aa4b-4553-ae06-75c9e288b576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.574 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9ad803-9b7b-486c-b2e2-f190c31141b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.606 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[4b11aa25-d588-49eb-b298-c53651f3af35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.642 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[039fa3bd-3e04-40c5-a2ef-cdfd3e150c95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8407621-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:a6:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535296, 'reachable_time': 30073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261553, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.667 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[fb747724-d63c-40c8-a9f0-7916bf10a7ba]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapb8407621-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535314, 'tstamp': 535314}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261554, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8407621-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535319, 'tstamp': 535319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261554, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.670 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8407621-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.675 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8407621-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.676 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.677 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8407621-60, col_values=(('external_ids', {'iface-id': 'aaa6ea3c-0164-44d4-b435-0c6c04e73e3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:45:50 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:45:50.677 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.693 2 DEBUG nova.compute.manager [req-3e75650f-6876-4b4e-beec-6f66d77fde27 req-664647e4-cd4b-4fce-af98-65e1ee31e36d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.693 2 DEBUG oslo_concurrency.lockutils [req-3e75650f-6876-4b4e-beec-6f66d77fde27 req-664647e4-cd4b-4fce-af98-65e1ee31e36d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.694 2 DEBUG oslo_concurrency.lockutils [req-3e75650f-6876-4b4e-beec-6f66d77fde27 req-664647e4-cd4b-4fce-af98-65e1ee31e36d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.694 2 DEBUG oslo_concurrency.lockutils [req-3e75650f-6876-4b4e-beec-6f66d77fde27 req-664647e4-cd4b-4fce-af98-65e1ee31e36d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.694 2 DEBUG nova.compute.manager [req-3e75650f-6876-4b4e-beec-6f66d77fde27 req-664647e4-cd4b-4fce-af98-65e1ee31e36d fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Processing event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.808 2 DEBUG nova.network.neutron [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updated VIF entry in instance network info cache for port 722eab1f-2c73-4b59-9732-99ee52407450. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.809 2 DEBUG nova.network.neutron [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updating instance_info_cache with network_info: [{"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:50 compute-0 nova_compute[194781]: 2025-10-02 19:45:50.827 2 DEBUG oslo_concurrency.lockutils [req-7b01f616-a158-4b03-84ea-0dba80385afa req-2e51390c-e130-4182-a0b4-c2c98664cb29 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.497 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.517 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.517 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.518 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.519 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.688 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.689 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434351.6881168, ead9703a-68cd-4f65-a0dd-296c0a357b90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.690 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] VM Started (Lifecycle Event)
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.698 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.703 2 INFO nova.virt.libvirt.driver [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Instance spawned successfully.
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.703 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.714 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.720 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.731 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.731 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.732 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.732 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.732 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.733 2 DEBUG nova.virt.libvirt.driver [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.747 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.747 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434351.6883907, ead9703a-68cd-4f65-a0dd-296c0a357b90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.747 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] VM Paused (Lifecycle Event)
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.774 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.786 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434351.6944644, ead9703a-68cd-4f65-a0dd-296c0a357b90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.786 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] VM Resumed (Lifecycle Event)
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.807 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.812 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.844 2 INFO nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Took 8.44 seconds to spawn the instance on the hypervisor.
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.845 2 DEBUG nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.854 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:45:51 compute-0 nova_compute[194781]: 2025-10-02 19:45:51.947 2 INFO nova.compute.manager [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Took 9.19 seconds to build instance.
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.033 2 DEBUG oslo_concurrency.lockutils [None req-c10a0c0a-f9d9-4c56-8eb3-ee14222aebb7 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.790 2 DEBUG nova.compute.manager [req-1b8855c5-21ac-4e6a-9de5-44a8e9ed5346 req-dc285975-7de7-4034-99db-c4ea0f800f6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.791 2 DEBUG oslo_concurrency.lockutils [req-1b8855c5-21ac-4e6a-9de5-44a8e9ed5346 req-dc285975-7de7-4034-99db-c4ea0f800f6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.791 2 DEBUG oslo_concurrency.lockutils [req-1b8855c5-21ac-4e6a-9de5-44a8e9ed5346 req-dc285975-7de7-4034-99db-c4ea0f800f6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.791 2 DEBUG oslo_concurrency.lockutils [req-1b8855c5-21ac-4e6a-9de5-44a8e9ed5346 req-dc285975-7de7-4034-99db-c4ea0f800f6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.791 2 DEBUG nova.compute.manager [req-1b8855c5-21ac-4e6a-9de5-44a8e9ed5346 req-dc285975-7de7-4034-99db-c4ea0f800f6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] No waiting events found dispatching network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:45:52 compute-0 nova_compute[194781]: 2025-10-02 19:45:52.792 2 WARNING nova.compute.manager [req-1b8855c5-21ac-4e6a-9de5-44a8e9ed5346 req-dc285975-7de7-4034-99db-c4ea0f800f6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received unexpected event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 for instance with vm_state active and task_state None.
Oct 02 19:45:53 compute-0 podman[261563]: 2025-10-02 19:45:53.734273987 +0000 UTC m=+0.089388364 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:45:53 compute-0 podman[261562]: 2025-10-02 19:45:53.739400194 +0000 UTC m=+0.110110667 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:45:54 compute-0 nova_compute[194781]: 2025-10-02 19:45:54.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:55 compute-0 nova_compute[194781]: 2025-10-02 19:45:55.452 2 INFO nova.compute.manager [None req-768689b6-5268-4d81-8e6e-2b95cde9862f 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Get console output
Oct 02 19:45:55 compute-0 nova_compute[194781]: 2025-10-02 19:45:55.570 52 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 19:45:56 compute-0 nova_compute[194781]: 2025-10-02 19:45:56.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:56 compute-0 podman[261605]: 2025-10-02 19:45:56.741810301 +0000 UTC m=+0.106498831 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 19:45:56 compute-0 podman[261606]: 2025-10-02 19:45:56.805030886 +0000 UTC m=+0.143368164 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 19:45:58 compute-0 nova_compute[194781]: 2025-10-02 19:45:58.535 2 DEBUG nova.compute.manager [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-changed-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:45:58 compute-0 nova_compute[194781]: 2025-10-02 19:45:58.536 2 DEBUG nova.compute.manager [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Refreshing instance network info cache due to event network-changed-603f706b-6b06-4ad2-b22b-b118c9d68755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:45:58 compute-0 nova_compute[194781]: 2025-10-02 19:45:58.536 2 DEBUG oslo_concurrency.lockutils [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:45:58 compute-0 nova_compute[194781]: 2025-10-02 19:45:58.537 2 DEBUG oslo_concurrency.lockutils [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:45:58 compute-0 nova_compute[194781]: 2025-10-02 19:45:58.537 2 DEBUG nova.network.neutron [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Refreshing network info cache for port 603f706b-6b06-4ad2-b22b-b118c9d68755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:45:59 compute-0 nova_compute[194781]: 2025-10-02 19:45:59.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:45:59 compute-0 podman[209015]: time="2025-10-02T19:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:45:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34447 "" "Go-http-client/1.1"
Oct 02 19:45:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6155 "" "Go-http-client/1.1"
Oct 02 19:45:59 compute-0 nova_compute[194781]: 2025-10-02 19:45:59.814 2 DEBUG nova.network.neutron [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updated VIF entry in instance network info cache for port 603f706b-6b06-4ad2-b22b-b118c9d68755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:45:59 compute-0 nova_compute[194781]: 2025-10-02 19:45:59.815 2 DEBUG nova.network.neutron [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updating instance_info_cache with network_info: [{"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:45:59 compute-0 nova_compute[194781]: 2025-10-02 19:45:59.836 2 DEBUG oslo_concurrency.lockutils [req-d51a66c2-c2ae-45b4-862b-d2942765c2c3 req-88874df8-71f9-4cae-af59-be1f40f56f97 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-77c85795-42d5-4ba9-bbb5-b7009b5f992f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:46:00 compute-0 nova_compute[194781]: 2025-10-02 19:46:00.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:01 compute-0 nova_compute[194781]: 2025-10-02 19:46:01.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: ERROR   19:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: ERROR   19:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: ERROR   19:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: ERROR   19:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: ERROR   19:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:46:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:46:02 compute-0 podman[261646]: 2025-10-02 19:46:02.749547251 +0000 UTC m=+0.115716516 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:46:04 compute-0 nova_compute[194781]: 2025-10-02 19:46:04.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.758 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.759 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.777 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.849 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.850 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.857 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 19:46:05 compute-0 nova_compute[194781]: 2025-10-02 19:46:05.858 2 INFO nova.compute.claims [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Claim successful on node compute-0.ctlplane.example.com
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.069 2 DEBUG nova.compute.provider_tree [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.085 2 DEBUG nova.scheduler.client.report [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.105 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.106 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.156 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.156 2 DEBUG nova.network.neutron [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.176 2 INFO nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.210 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.326 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.328 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.329 2 INFO nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Creating image(s)
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.330 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "/var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.332 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "/var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.333 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "/var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.360 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.425 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.428 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "a9843d922d50b317c389e448cbaaf7849a9d0409" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.429 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.465 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.526 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.529 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.584 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409,backing_fmt=raw /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.587 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "a9843d922d50b317c389e448cbaaf7849a9d0409" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.589 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.653 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.656 2 DEBUG nova.virt.disk.api [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Checking if we can resize image /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.657 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.724 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.727 2 DEBUG nova.virt.disk.api [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Cannot resize image /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.728 2 DEBUG nova.objects.instance [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lazy-loading 'migration_context' on Instance uuid 573d9025-53e1-4cfe-b8ab-d19f024da535 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.758 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.760 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Ensure instance console log exists: /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.762 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.763 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.764 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:06 compute-0 nova_compute[194781]: 2025-10-02 19:46:06.821 2 DEBUG nova.policy [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4ebdfb48323c4124b435387dfed92c5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ed4915cd456424c8ac561ce0da33795', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 19:46:09 compute-0 nova_compute[194781]: 2025-10-02 19:46:09.316 2 DEBUG nova.network.neutron [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Successfully created port: e1f9176b-db23-4517-bd66-1fcfe605084c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 19:46:09 compute-0 nova_compute[194781]: 2025-10-02 19:46:09.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:10 compute-0 nova_compute[194781]: 2025-10-02 19:46:10.946 2 DEBUG nova.network.neutron [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Successfully updated port: e1f9176b-db23-4517-bd66-1fcfe605084c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 19:46:10 compute-0 nova_compute[194781]: 2025-10-02 19:46:10.966 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:46:10 compute-0 nova_compute[194781]: 2025-10-02 19:46:10.967 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquired lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:46:10 compute-0 nova_compute[194781]: 2025-10-02 19:46:10.968 2 DEBUG nova.network.neutron [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 19:46:11 compute-0 nova_compute[194781]: 2025-10-02 19:46:11.045 2 DEBUG nova.compute.manager [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-changed-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:11 compute-0 nova_compute[194781]: 2025-10-02 19:46:11.046 2 DEBUG nova.compute.manager [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Refreshing instance network info cache due to event network-changed-e1f9176b-db23-4517-bd66-1fcfe605084c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:46:11 compute-0 nova_compute[194781]: 2025-10-02 19:46:11.047 2 DEBUG oslo_concurrency.lockutils [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:46:11 compute-0 nova_compute[194781]: 2025-10-02 19:46:11.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:11 compute-0 nova_compute[194781]: 2025-10-02 19:46:11.118 2 DEBUG nova.network.neutron [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.526 2 DEBUG nova.network.neutron [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Updating instance_info_cache with network_info: [{"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.560 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Releasing lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.561 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Instance network_info: |[{"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.562 2 DEBUG oslo_concurrency.lockutils [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.563 2 DEBUG nova.network.neutron [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Refreshing network info cache for port e1f9176b-db23-4517-bd66-1fcfe605084c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.569 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Start _get_guest_xml network_info=[{"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_options': None, 'image_id': 'c191839f-7364-41ce-80c8-eff8077fc750'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.587 2 WARNING nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.597 2 DEBUG nova.virt.libvirt.host [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.598 2 DEBUG nova.virt.libvirt.host [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.607 2 DEBUG nova.virt.libvirt.host [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.609 2 DEBUG nova.virt.libvirt.host [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.610 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.610 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T19:40:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7ab5ea96-81dd-4496-8a1f-012f7d2c53c5',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T19:40:52Z,direct_url=<?>,disk_format='qcow2',id=c191839f-7364-41ce-80c8-eff8077fc750,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c6bd7784161a4cc3a2e8715feee92228',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T19:40:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.612 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.613 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.613 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.614 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.615 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.616 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.617 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.618 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.618 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.619 2 DEBUG nova.virt.hardware [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.626 2 DEBUG nova.virt.libvirt.vif [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:46:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593220754',display_name='tempest-TestNetworkBasicOps-server-593220754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593220754',id=15,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAdA0CV1iE1DyD2+NLBWRUbC+KEvTji1UYKjnEhbkqnSL5R4AbMkfDcjrUSokm+EuReR67zNn+9SLj7bKG8dpjgHugj7v15d1sDyqRy2qcuHvTc4pBER+eTTE+qxGE4rhw==',key_name='tempest-TestNetworkBasicOps-2075777634',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ed4915cd456424c8ac561ce0da33795',ramdisk_id='',reservation_id='r-xusacnjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1499067436',owner_user_name='tempest-TestNetworkBasicOps-1499067436-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:46:06Z,user_data=None,user_id='4ebdfb48323c4124b435387dfed92c5e',uuid=573d9025-53e1-4cfe-b8ab-d19f024da535,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.627 2 DEBUG nova.network.os_vif_util [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converting VIF {"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.628 2 DEBUG nova.network.os_vif_util [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.630 2 DEBUG nova.objects.instance [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lazy-loading 'pci_devices' on Instance uuid 573d9025-53e1-4cfe-b8ab-d19f024da535 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.644 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] End _get_guest_xml xml=<domain type="kvm">
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <uuid>573d9025-53e1-4cfe-b8ab-d19f024da535</uuid>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <name>instance-0000000f</name>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <memory>131072</memory>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <vcpu>1</vcpu>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <metadata>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:name>tempest-TestNetworkBasicOps-server-593220754</nova:name>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:creationTime>2025-10-02 19:46:12</nova:creationTime>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:flavor name="m1.nano">
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:memory>128</nova:memory>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:disk>1</nova:disk>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:swap>0</nova:swap>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:vcpus>1</nova:vcpus>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       </nova:flavor>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:owner>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:user uuid="4ebdfb48323c4124b435387dfed92c5e">tempest-TestNetworkBasicOps-1499067436-project-member</nova:user>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:project uuid="4ed4915cd456424c8ac561ce0da33795">tempest-TestNetworkBasicOps-1499067436</nova:project>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       </nova:owner>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:root type="image" uuid="c191839f-7364-41ce-80c8-eff8077fc750"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <nova:ports>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         <nova:port uuid="e1f9176b-db23-4517-bd66-1fcfe605084c">
Oct 02 19:46:12 compute-0 nova_compute[194781]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:         </nova:port>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       </nova:ports>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </nova:instance>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </metadata>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <sysinfo type="smbios">
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <system>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <entry name="manufacturer">RDO</entry>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <entry name="product">OpenStack Compute</entry>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <entry name="serial">573d9025-53e1-4cfe-b8ab-d19f024da535</entry>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <entry name="uuid">573d9025-53e1-4cfe-b8ab-d19f024da535</entry>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <entry name="family">Virtual Machine</entry>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </system>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </sysinfo>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <os>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <boot dev="hd"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <smbios mode="sysinfo"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </os>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <features>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <acpi/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <apic/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <vmcoreinfo/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </features>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <clock offset="utc">
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <timer name="hpet" present="no"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </clock>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <cpu mode="host-model" match="exact">
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </cpu>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   <devices>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <disk type="file" device="disk">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <target dev="vda" bus="virtio"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <disk type="file" device="cdrom">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <driver name="qemu" type="raw" cache="none"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <source file="/var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.config"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <target dev="sda" bus="sata"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </disk>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <interface type="ethernet">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <mac address="fa:16:3e:f2:8b:1d"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <mtu size="1442"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <target dev="tape1f9176b-db"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </interface>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <serial type="pty">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <log file="/var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/console.log" append="off"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </serial>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <video>
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <model type="virtio"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </video>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <input type="tablet" bus="usb"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <rng model="virtio">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <backend model="random">/dev/urandom</backend>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </rng>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <controller type="usb" index="0"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     <memballoon model="virtio">
Oct 02 19:46:12 compute-0 nova_compute[194781]:       <stats period="10"/>
Oct 02 19:46:12 compute-0 nova_compute[194781]:     </memballoon>
Oct 02 19:46:12 compute-0 nova_compute[194781]:   </devices>
Oct 02 19:46:12 compute-0 nova_compute[194781]: </domain>
Oct 02 19:46:12 compute-0 nova_compute[194781]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.656 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Preparing to wait for external event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.656 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.656 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.656 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.657 2 DEBUG nova.virt.libvirt.vif [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T19:46:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593220754',display_name='tempest-TestNetworkBasicOps-server-593220754',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593220754',id=15,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAdA0CV1iE1DyD2+NLBWRUbC+KEvTji1UYKjnEhbkqnSL5R4AbMkfDcjrUSokm+EuReR67zNn+9SLj7bKG8dpjgHugj7v15d1sDyqRy2qcuHvTc4pBER+eTTE+qxGE4rhw==',key_name='tempest-TestNetworkBasicOps-2075777634',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ed4915cd456424c8ac561ce0da33795',ramdisk_id='',reservation_id='r-xusacnjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1499067436',owner_user_name='tempest-TestNetworkBasicOps-1499067436-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T19:46:06Z,user_data=None,user_id='4ebdfb48323c4124b435387dfed92c5e',uuid=573d9025-53e1-4cfe-b8ab-d19f024da535,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.657 2 DEBUG nova.network.os_vif_util [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converting VIF {"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.658 2 DEBUG nova.network.os_vif_util [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.659 2 DEBUG os_vif [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1f9176b-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.667 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1f9176b-db, col_values=(('external_ids', {'iface-id': 'e1f9176b-db23-4517-bd66-1fcfe605084c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:8b:1d', 'vm-uuid': '573d9025-53e1-4cfe-b8ab-d19f024da535'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:12 compute-0 NetworkManager[52324]: <info>  [1759434372.6716] manager: (tape1f9176b-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.683 2 INFO os_vif [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db')
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.744 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.745 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.745 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] No VIF found with MAC fa:16:3e:f2:8b:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 19:46:12 compute-0 nova_compute[194781]: 2025-10-02 19:46:12.746 2 INFO nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Using config drive
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.486 2 INFO nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Creating config drive at /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.config
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.497 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpims0z__k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.633 2 DEBUG oslo_concurrency.processutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpims0z__k" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:13 compute-0 kernel: tape1f9176b-db: entered promiscuous mode
Oct 02 19:46:13 compute-0 NetworkManager[52324]: <info>  [1759434373.7266] manager: (tape1f9176b-db): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Oct 02 19:46:13 compute-0 ovn_controller[97052]: 2025-10-02T19:46:13Z|00177|binding|INFO|Claiming lport e1f9176b-db23-4517-bd66-1fcfe605084c for this chassis.
Oct 02 19:46:13 compute-0 ovn_controller[97052]: 2025-10-02T19:46:13Z|00178|binding|INFO|e1f9176b-db23-4517-bd66-1fcfe605084c: Claiming fa:16:3e:f2:8b:1d 10.100.0.6
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:13 compute-0 ovn_controller[97052]: 2025-10-02T19:46:13Z|00179|binding|INFO|Setting lport e1f9176b-db23-4517-bd66-1fcfe605084c ovn-installed in OVS
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:13 compute-0 ovn_controller[97052]: 2025-10-02T19:46:13Z|00180|binding|INFO|Setting lport e1f9176b-db23-4517-bd66-1fcfe605084c up in Southbound
Oct 02 19:46:13 compute-0 podman[261692]: 2025-10-02 19:46:13.765864942 +0000 UTC m=+0.116286462 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.767 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:8b:1d 10.100.0.6'], port_security=['fa:16:3e:f2:8b:1d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '573d9025-53e1-4cfe-b8ab-d19f024da535', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ed4915cd456424c8ac561ce0da33795', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05fbfb49-f0f2-4924-bcc9-e203d7f5cfa6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4d2301e-c986-4618-9fd9-f3243fb030c9, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=e1f9176b-db23-4517-bd66-1fcfe605084c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.769 105943 INFO neutron.agent.ovn.metadata.agent [-] Port e1f9176b-db23-4517-bd66-1fcfe605084c in datapath 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 bound to our chassis
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.772 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8
Oct 02 19:46:13 compute-0 systemd-udevd[261739]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 19:46:13 compute-0 podman[261691]: 2025-10-02 19:46:13.78193372 +0000 UTC m=+0.138273617 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.795 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4f4872-ffda-4a8c-85eb-526fa70b2b71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:13 compute-0 NetworkManager[52324]: <info>  [1759434373.8000] device (tape1f9176b-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 19:46:13 compute-0 NetworkManager[52324]: <info>  [1759434373.8047] device (tape1f9176b-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 19:46:13 compute-0 systemd-machined[154795]: New machine qemu-16-instance-0000000f.
Oct 02 19:46:13 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.832 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ea8849-224e-4c38-925c-d9a447bf8319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.840 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[30a0ba2b-c400-446e-be44-d8a995aaa910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.875 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[343b0b3e-846e-4e1e-bb7f-c9430fc8c039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.895 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfeca2e-a5f8-49df-baea-b98e262f2a4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c6f59f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:9e:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547403, 'reachable_time': 34196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261755, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.912 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[c0377b94-abc8-4a51-9162-44b4f97b4d86]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c6f59f2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547414, 'tstamp': 547414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261756, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c6f59f2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547418, 'tstamp': 547418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261756, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.914 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c6f59f2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:13 compute-0 nova_compute[194781]: 2025-10-02 19:46:13.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.918 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c6f59f2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.918 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.919 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c6f59f2-a0, col_values=(('external_ids', {'iface-id': 'fb07e353-d679-475b-a1f5-b73dcea986a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:13 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:13.919 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.605 2 DEBUG nova.compute.manager [req-1ceee62e-3e75-4234-bbf1-97ee60daa606 req-05766021-1149-4c4b-ac60-ade2ed4901d3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.606 2 DEBUG oslo_concurrency.lockutils [req-1ceee62e-3e75-4234-bbf1-97ee60daa606 req-05766021-1149-4c4b-ac60-ade2ed4901d3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.607 2 DEBUG oslo_concurrency.lockutils [req-1ceee62e-3e75-4234-bbf1-97ee60daa606 req-05766021-1149-4c4b-ac60-ade2ed4901d3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.609 2 DEBUG oslo_concurrency.lockutils [req-1ceee62e-3e75-4234-bbf1-97ee60daa606 req-05766021-1149-4c4b-ac60-ade2ed4901d3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.611 2 DEBUG nova.compute.manager [req-1ceee62e-3e75-4234-bbf1-97ee60daa606 req-05766021-1149-4c4b-ac60-ade2ed4901d3 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Processing event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.802 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434374.8020895, 573d9025-53e1-4cfe-b8ab-d19f024da535 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.803 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] VM Started (Lifecycle Event)
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.806 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.811 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.818 2 INFO nova.virt.libvirt.driver [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Instance spawned successfully.
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.819 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.828 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.847 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.856 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.857 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.858 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.858 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.859 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.860 2 DEBUG nova.virt.libvirt.driver [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.887 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.889 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434374.8022826, 573d9025-53e1-4cfe-b8ab-d19f024da535 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.890 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] VM Paused (Lifecycle Event)
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.927 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.932 2 DEBUG nova.virt.driver [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] Emitting event <LifecycleEvent: 1759434374.8101265, 573d9025-53e1-4cfe-b8ab-d19f024da535 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.932 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] VM Resumed (Lifecycle Event)
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.944 2 INFO nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Took 8.62 seconds to spawn the instance on the hypervisor.
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.945 2 DEBUG nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.952 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.963 2 DEBUG nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 19:46:14 compute-0 nova_compute[194781]: 2025-10-02 19:46:14.993 2 INFO nova.compute.manager [None req-cac76cc9-1fdc-4e86-9807-ac4e8267212c - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 19:46:15 compute-0 nova_compute[194781]: 2025-10-02 19:46:15.035 2 INFO nova.compute.manager [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Took 9.21 seconds to build instance.
Oct 02 19:46:15 compute-0 nova_compute[194781]: 2025-10-02 19:46:15.060 2 DEBUG oslo_concurrency.lockutils [None req-42577397-56fc-477b-ad0b-7efac1b08402 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:15 compute-0 nova_compute[194781]: 2025-10-02 19:46:15.344 2 DEBUG nova.network.neutron [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Updated VIF entry in instance network info cache for port e1f9176b-db23-4517-bd66-1fcfe605084c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:46:15 compute-0 nova_compute[194781]: 2025-10-02 19:46:15.344 2 DEBUG nova.network.neutron [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Updating instance_info_cache with network_info: [{"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:46:15 compute-0 nova_compute[194781]: 2025-10-02 19:46:15.427 2 DEBUG oslo_concurrency.lockutils [req-c6af6c67-8f36-43bb-8bd2-2c24991b5c15 req-dcf2b298-7ca3-4fc0-b8fe-46715987edc2 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:46:15 compute-0 podman[261766]: 2025-10-02 19:46:15.721018905 +0000 UTC m=+0.089052446 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4)
Oct 02 19:46:15 compute-0 podman[261765]: 2025-10-02 19:46:15.745346553 +0000 UTC m=+0.113214270 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Oct 02 19:46:15 compute-0 podman[261767]: 2025-10-02 19:46:15.782844323 +0000 UTC m=+0.137005294 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.703 2 DEBUG nova.compute.manager [req-ee3003d8-f818-46c2-8b5c-08cc11dcf9b1 req-0fd63e58-35cf-45d5-ba35-bcb055b5cef4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.704 2 DEBUG oslo_concurrency.lockutils [req-ee3003d8-f818-46c2-8b5c-08cc11dcf9b1 req-0fd63e58-35cf-45d5-ba35-bcb055b5cef4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.705 2 DEBUG oslo_concurrency.lockutils [req-ee3003d8-f818-46c2-8b5c-08cc11dcf9b1 req-0fd63e58-35cf-45d5-ba35-bcb055b5cef4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.705 2 DEBUG oslo_concurrency.lockutils [req-ee3003d8-f818-46c2-8b5c-08cc11dcf9b1 req-0fd63e58-35cf-45d5-ba35-bcb055b5cef4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.706 2 DEBUG nova.compute.manager [req-ee3003d8-f818-46c2-8b5c-08cc11dcf9b1 req-0fd63e58-35cf-45d5-ba35-bcb055b5cef4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] No waiting events found dispatching network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:46:16 compute-0 nova_compute[194781]: 2025-10-02 19:46:16.706 2 WARNING nova.compute.manager [req-ee3003d8-f818-46c2-8b5c-08cc11dcf9b1 req-0fd63e58-35cf-45d5-ba35-bcb055b5cef4 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received unexpected event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c for instance with vm_state active and task_state None.
Oct 02 19:46:17 compute-0 nova_compute[194781]: 2025-10-02 19:46:17.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:18 compute-0 nova_compute[194781]: 2025-10-02 19:46:18.775 2 DEBUG nova.compute.manager [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-changed-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:18 compute-0 nova_compute[194781]: 2025-10-02 19:46:18.776 2 DEBUG nova.compute.manager [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Refreshing instance network info cache due to event network-changed-e1f9176b-db23-4517-bd66-1fcfe605084c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 19:46:18 compute-0 nova_compute[194781]: 2025-10-02 19:46:18.776 2 DEBUG oslo_concurrency.lockutils [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:46:18 compute-0 nova_compute[194781]: 2025-10-02 19:46:18.777 2 DEBUG oslo_concurrency.lockutils [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquired lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:46:18 compute-0 nova_compute[194781]: 2025-10-02 19:46:18.777 2 DEBUG nova.network.neutron [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Refreshing network info cache for port e1f9176b-db23-4517-bd66-1fcfe605084c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 19:46:20 compute-0 nova_compute[194781]: 2025-10-02 19:46:20.004 2 DEBUG nova.network.neutron [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Updated VIF entry in instance network info cache for port e1f9176b-db23-4517-bd66-1fcfe605084c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 19:46:20 compute-0 nova_compute[194781]: 2025-10-02 19:46:20.005 2 DEBUG nova.network.neutron [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Updating instance_info_cache with network_info: [{"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:46:20 compute-0 nova_compute[194781]: 2025-10-02 19:46:20.029 2 DEBUG oslo_concurrency.lockutils [req-62ce0ed2-13a5-4935-ba7b-8b18b0ed757a req-976e5018-f8c6-46ae-857e-7ffc2241ef69 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Releasing lock "refresh_cache-573d9025-53e1-4cfe-b8ab-d19f024da535" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:46:21 compute-0 nova_compute[194781]: 2025-10-02 19:46:21.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:21 compute-0 ovn_controller[97052]: 2025-10-02T19:46:21Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:57:cd 10.100.0.62
Oct 02 19:46:21 compute-0 ovn_controller[97052]: 2025-10-02T19:46:21Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:57:cd 10.100.0.62
Oct 02 19:46:22 compute-0 nova_compute[194781]: 2025-10-02 19:46:22.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:24 compute-0 podman[261832]: 2025-10-02 19:46:24.741913878 +0000 UTC m=+0.109131471 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:46:24 compute-0 podman[261831]: 2025-10-02 19:46:24.77046054 +0000 UTC m=+0.127616804 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:46:26 compute-0 nova_compute[194781]: 2025-10-02 19:46:26.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:27 compute-0 nova_compute[194781]: 2025-10-02 19:46:27.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:27 compute-0 podman[261870]: 2025-10-02 19:46:27.748732282 +0000 UTC m=+0.115348736 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:46:27 compute-0 podman[261871]: 2025-10-02 19:46:27.769168307 +0000 UTC m=+0.134572739 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 19:46:29 compute-0 podman[209015]: time="2025-10-02T19:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:46:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34447 "" "Go-http-client/1.1"
Oct 02 19:46:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6156 "" "Go-http-client/1.1"
Oct 02 19:46:31 compute-0 nova_compute[194781]: 2025-10-02 19:46:31.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: ERROR   19:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: ERROR   19:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: ERROR   19:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: ERROR   19:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: ERROR   19:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:46:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:46:32 compute-0 nova_compute[194781]: 2025-10-02 19:46:32.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:33 compute-0 podman[261912]: 2025-10-02 19:46:33.776913659 +0000 UTC m=+0.145952042 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:46:34 compute-0 nova_compute[194781]: 2025-10-02 19:46:34.050 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:36 compute-0 nova_compute[194781]: 2025-10-02 19:46:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:36 compute-0 nova_compute[194781]: 2025-10-02 19:46:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:37 compute-0 nova_compute[194781]: 2025-10-02 19:46:37.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:37 compute-0 nova_compute[194781]: 2025-10-02 19:46:37.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:46:37 compute-0 nova_compute[194781]: 2025-10-02 19:46:37.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.091 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.098 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.099 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.099 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.261 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.343 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.348 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.409 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.421 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.486 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.487 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.563 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.571 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.640 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.641 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.713 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.719 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.787 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.788 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.865 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.867 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.934 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.935 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:40 compute-0 nova_compute[194781]: 2025-10-02 19:46:40.995 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.008 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.075 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.076 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.144 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.796 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.801 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4460MB free_disk=72.32378387451172GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.803 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.804 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.959 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.960 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.960 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 77c85795-42d5-4ba9-bbb5-b7009b5f992f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.961 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.961 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 573d9025-53e1-4cfe-b8ab-d19f024da535 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.961 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:46:41 compute-0 nova_compute[194781]: 2025-10-02 19:46:41.961 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:46:42 compute-0 nova_compute[194781]: 2025-10-02 19:46:42.109 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:46:42 compute-0 nova_compute[194781]: 2025-10-02 19:46:42.125 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:46:42 compute-0 nova_compute[194781]: 2025-10-02 19:46:42.153 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:46:42 compute-0 nova_compute[194781]: 2025-10-02 19:46:42.154 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:42 compute-0 nova_compute[194781]: 2025-10-02 19:46:42.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:43 compute-0 nova_compute[194781]: 2025-10-02 19:46:43.154 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:43 compute-0 nova_compute[194781]: 2025-10-02 19:46:43.154 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:43 compute-0 nova_compute[194781]: 2025-10-02 19:46:43.155 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:44 compute-0 nova_compute[194781]: 2025-10-02 19:46:44.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:44 compute-0 podman[261984]: 2025-10-02 19:46:44.751783084 +0000 UTC m=+0.129344619 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:46:44 compute-0 podman[261985]: 2025-10-02 19:46:44.78126316 +0000 UTC m=+0.135577175 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:46:46 compute-0 ovn_controller[97052]: 2025-10-02T19:46:46Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:8b:1d 10.100.0.6
Oct 02 19:46:46 compute-0 ovn_controller[97052]: 2025-10-02T19:46:46Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:8b:1d 10.100.0.6
Oct 02 19:46:46 compute-0 nova_compute[194781]: 2025-10-02 19:46:46.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:46 compute-0 podman[262024]: 2025-10-02 19:46:46.73667827 +0000 UTC m=+0.087649068 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:46:46 compute-0 podman[262023]: 2025-10-02 19:46:46.745362952 +0000 UTC m=+0.109271475 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=edpm)
Oct 02 19:46:46 compute-0 podman[262022]: 2025-10-02 19:46:46.754509325 +0000 UTC m=+0.119550418 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:46:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:47.493 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:47.494 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:47.495 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:47 compute-0 nova_compute[194781]: 2025-10-02 19:46:47.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:47 compute-0 ovn_controller[97052]: 2025-10-02T19:46:47Z|00181|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.073 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.310 2 INFO nova.compute.manager [None req-a5774388-59b3-4735-87a2-b82f1f0c5ef1 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Get console output
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.321 52 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.688 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.689 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.690 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.691 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.692 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.695 2 INFO nova.compute.manager [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Terminating instance
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.698 2 DEBUG nova.compute.manager [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:46:51 compute-0 kernel: tape1f9176b-db (unregistering): left promiscuous mode
Oct 02 19:46:51 compute-0 ovn_controller[97052]: 2025-10-02T19:46:51Z|00182|binding|INFO|Releasing lport e1f9176b-db23-4517-bd66-1fcfe605084c from this chassis (sb_readonly=0)
Oct 02 19:46:51 compute-0 ovn_controller[97052]: 2025-10-02T19:46:51Z|00183|binding|INFO|Setting lport e1f9176b-db23-4517-bd66-1fcfe605084c down in Southbound
Oct 02 19:46:51 compute-0 ovn_controller[97052]: 2025-10-02T19:46:51Z|00184|binding|INFO|Removing iface tape1f9176b-db ovn-installed in OVS
Oct 02 19:46:51 compute-0 NetworkManager[52324]: <info>  [1759434411.7577] device (tape1f9176b-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.780 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:8b:1d 10.100.0.6'], port_security=['fa:16:3e:f2:8b:1d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '573d9025-53e1-4cfe-b8ab-d19f024da535', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ed4915cd456424c8ac561ce0da33795', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05fbfb49-f0f2-4924-bcc9-e203d7f5cfa6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4d2301e-c986-4618-9fd9-f3243fb030c9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=e1f9176b-db23-4517-bd66-1fcfe605084c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.782 105943 INFO neutron.agent.ovn.metadata.agent [-] Port e1f9176b-db23-4517-bd66-1fcfe605084c in datapath 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 unbound from our chassis
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.789 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.811 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[9c79fa53-1b52-47f1-a8f4-1d038159202e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:51 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 02 19:46:51 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 32.800s CPU time.
Oct 02 19:46:51 compute-0 systemd-machined[154795]: Machine qemu-16-instance-0000000f terminated.
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.861 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[0c87d6f8-e500-472d-b04a-4da5fd724ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.867 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[3585dfe3-de6d-472d-8779-6e750de765f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.907 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[162719bc-a1a4-440a-af3d-8c7622356e98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.956 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[69457ba5-c214-422d-95a6-2d8e8bfffb4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c6f59f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:9e:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547403, 'reachable_time': 34196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262104, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.977 2 INFO nova.virt.libvirt.driver [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Instance destroyed successfully.
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.978 2 DEBUG nova.objects.instance [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lazy-loading 'resources' on Instance uuid 573d9025-53e1-4cfe-b8ab-d19f024da535 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.982 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0f21df-5092-4a7b-9c0f-4381426e4132]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c6f59f2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547414, 'tstamp': 547414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262118, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c6f59f2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547418, 'tstamp': 547418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262118, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.984 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c6f59f2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.995 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c6f59f2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.995 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.995 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c6f59f2-a0, col_values=(('external_ids', {'iface-id': 'fb07e353-d679-475b-a1f5-b73dcea986a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:51 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:51.996 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.998 2 DEBUG nova.virt.libvirt.vif [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:46:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593220754',display_name='tempest-TestNetworkBasicOps-server-593220754',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593220754',id=15,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAdA0CV1iE1DyD2+NLBWRUbC+KEvTji1UYKjnEhbkqnSL5R4AbMkfDcjrUSokm+EuReR67zNn+9SLj7bKG8dpjgHugj7v15d1sDyqRy2qcuHvTc4pBER+eTTE+qxGE4rhw==',key_name='tempest-TestNetworkBasicOps-2075777634',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:46:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ed4915cd456424c8ac561ce0da33795',ramdisk_id='',reservation_id='r-xusacnjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1499067436',owner_user_name='tempest-TestNetworkBasicOps-1499067436-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:46:15Z,user_data=None,user_id='4ebdfb48323c4124b435387dfed92c5e',uuid=573d9025-53e1-4cfe-b8ab-d19f024da535,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:46:51 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.998 2 DEBUG nova.network.os_vif_util [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converting VIF {"id": "e1f9176b-db23-4517-bd66-1fcfe605084c", "address": "fa:16:3e:f2:8b:1d", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f9176b-db", "ovs_interfaceid": "e1f9176b-db23-4517-bd66-1fcfe605084c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:51.999 2 DEBUG nova.network.os_vif_util [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.000 2 DEBUG os_vif [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f9176b-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.008 2 INFO os_vif [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:8b:1d,bridge_name='br-int',has_traffic_filtering=True,id=e1f9176b-db23-4517-bd66-1fcfe605084c,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f9176b-db')
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.009 2 INFO nova.virt.libvirt.driver [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Deleting instance files /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535_del
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.010 2 INFO nova.virt.libvirt.driver [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Deletion of /var/lib/nova/instances/573d9025-53e1-4cfe-b8ab-d19f024da535_del complete
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.060 2 INFO nova.compute.manager [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Took 0.36 seconds to destroy the instance on the hypervisor.
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.061 2 DEBUG oslo.service.loopingcall [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.062 2 DEBUG nova.compute.manager [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.062 2 DEBUG nova.network.neutron [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.068 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.599 2 DEBUG nova.compute.manager [req-e5d241aa-1326-4528-863b-92222e1dd1c5 req-19b86ef0-37cd-45ff-b1f3-29ad0f6ca137 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-vif-unplugged-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.600 2 DEBUG oslo_concurrency.lockutils [req-e5d241aa-1326-4528-863b-92222e1dd1c5 req-19b86ef0-37cd-45ff-b1f3-29ad0f6ca137 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.600 2 DEBUG oslo_concurrency.lockutils [req-e5d241aa-1326-4528-863b-92222e1dd1c5 req-19b86ef0-37cd-45ff-b1f3-29ad0f6ca137 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.601 2 DEBUG oslo_concurrency.lockutils [req-e5d241aa-1326-4528-863b-92222e1dd1c5 req-19b86ef0-37cd-45ff-b1f3-29ad0f6ca137 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.601 2 DEBUG nova.compute.manager [req-e5d241aa-1326-4528-863b-92222e1dd1c5 req-19b86ef0-37cd-45ff-b1f3-29ad0f6ca137 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] No waiting events found dispatching network-vif-unplugged-e1f9176b-db23-4517-bd66-1fcfe605084c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.602 2 DEBUG nova.compute.manager [req-e5d241aa-1326-4528-863b-92222e1dd1c5 req-19b86ef0-37cd-45ff-b1f3-29ad0f6ca137 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-vif-unplugged-e1f9176b-db23-4517-bd66-1fcfe605084c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:46:52 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:52.944 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:46:52 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:52.945 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.972 2 DEBUG nova.network.neutron [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:46:52 compute-0 nova_compute[194781]: 2025-10-02 19:46:52.992 2 INFO nova.compute.manager [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Took 0.93 seconds to deallocate network for instance.
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.044 2 DEBUG nova.compute.manager [req-1630d6a3-5387-433d-a4de-1c31e347c4c3 req-b94ff8be-8dd8-4ccb-8d5d-69494e8f42f9 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-vif-deleted-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.049 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.050 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.200 2 DEBUG nova.compute.provider_tree [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.224 2 DEBUG nova.scheduler.client.report [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.254 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.297 2 INFO nova.scheduler.client.report [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Deleted allocations for instance 573d9025-53e1-4cfe-b8ab-d19f024da535
Oct 02 19:46:53 compute-0 nova_compute[194781]: 2025-10-02 19:46:53.355 2 DEBUG oslo_concurrency.lockutils [None req-ac677567-ff87-4c09-aa6c-e2aa85108758 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:54 compute-0 nova_compute[194781]: 2025-10-02 19:46:54.681 2 DEBUG nova.compute.manager [req-d3ce1fcd-47e1-41f0-8afc-6ab023492bac req-628dfbc7-d041-407f-9d97-b0e22e794d6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:54 compute-0 nova_compute[194781]: 2025-10-02 19:46:54.682 2 DEBUG oslo_concurrency.lockutils [req-d3ce1fcd-47e1-41f0-8afc-6ab023492bac req-628dfbc7-d041-407f-9d97-b0e22e794d6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:54 compute-0 nova_compute[194781]: 2025-10-02 19:46:54.683 2 DEBUG oslo_concurrency.lockutils [req-d3ce1fcd-47e1-41f0-8afc-6ab023492bac req-628dfbc7-d041-407f-9d97-b0e22e794d6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:54 compute-0 nova_compute[194781]: 2025-10-02 19:46:54.684 2 DEBUG oslo_concurrency.lockutils [req-d3ce1fcd-47e1-41f0-8afc-6ab023492bac req-628dfbc7-d041-407f-9d97-b0e22e794d6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "573d9025-53e1-4cfe-b8ab-d19f024da535-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:54 compute-0 nova_compute[194781]: 2025-10-02 19:46:54.685 2 DEBUG nova.compute.manager [req-d3ce1fcd-47e1-41f0-8afc-6ab023492bac req-628dfbc7-d041-407f-9d97-b0e22e794d6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] No waiting events found dispatching network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:46:54 compute-0 nova_compute[194781]: 2025-10-02 19:46:54.686 2 WARNING nova.compute.manager [req-d3ce1fcd-47e1-41f0-8afc-6ab023492bac req-628dfbc7-d041-407f-9d97-b0e22e794d6b fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Received unexpected event network-vif-plugged-e1f9176b-db23-4517-bd66-1fcfe605084c for instance with vm_state deleted and task_state None.
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.542 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.543 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.544 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.544 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.545 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.546 2 INFO nova.compute.manager [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Terminating instance
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.547 2 DEBUG nova.compute.manager [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:46:55 compute-0 kernel: tap603f706b-6b (unregistering): left promiscuous mode
Oct 02 19:46:55 compute-0 NetworkManager[52324]: <info>  [1759434415.6051] device (tap603f706b-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:46:55 compute-0 ovn_controller[97052]: 2025-10-02T19:46:55Z|00185|binding|INFO|Releasing lport 603f706b-6b06-4ad2-b22b-b118c9d68755 from this chassis (sb_readonly=0)
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 ovn_controller[97052]: 2025-10-02T19:46:55Z|00186|binding|INFO|Setting lport 603f706b-6b06-4ad2-b22b-b118c9d68755 down in Southbound
Oct 02 19:46:55 compute-0 ovn_controller[97052]: 2025-10-02T19:46:55Z|00187|binding|INFO|Removing iface tap603f706b-6b ovn-installed in OVS
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.623 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:b6:fc 10.100.0.11'], port_security=['fa:16:3e:72:b6:fc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77c85795-42d5-4ba9-bbb5-b7009b5f992f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ed4915cd456424c8ac561ce0da33795', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb0a552f-0bf7-41d1-8336-c4db68805f5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4d2301e-c986-4618-9fd9-f3243fb030c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=603f706b-6b06-4ad2-b22b-b118c9d68755) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.624 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 603f706b-6b06-4ad2-b22b-b118c9d68755 in datapath 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 unbound from our chassis
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.625 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.627 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[723fd564-1e26-4d46-802a-5fa206435000]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.628 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 namespace which is not needed anymore
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 02 19:46:55 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 43.989s CPU time.
Oct 02 19:46:55 compute-0 systemd-machined[154795]: Machine qemu-14-instance-0000000d terminated.
Oct 02 19:46:55 compute-0 podman[262126]: 2025-10-02 19:46:55.730358356 +0000 UTC m=+0.097722057 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 19:46:55 compute-0 podman[262125]: 2025-10-02 19:46:55.75490326 +0000 UTC m=+0.123367431 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.821 2 INFO nova.virt.libvirt.driver [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Instance destroyed successfully.
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.822 2 DEBUG nova.objects.instance [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lazy-loading 'resources' on Instance uuid 77c85795-42d5-4ba9-bbb5-b7009b5f992f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:46:55 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [NOTICE]   (261250) : haproxy version is 2.8.14-c23fe91
Oct 02 19:46:55 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [NOTICE]   (261250) : path to executable is /usr/sbin/haproxy
Oct 02 19:46:55 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [WARNING]  (261250) : Exiting Master process...
Oct 02 19:46:55 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [WARNING]  (261250) : Exiting Master process...
Oct 02 19:46:55 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [ALERT]    (261250) : Current worker (261252) exited with code 143 (Terminated)
Oct 02 19:46:55 compute-0 neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8[261214]: [WARNING]  (261250) : All workers exited. Exiting... (0)
Oct 02 19:46:55 compute-0 systemd[1]: libpod-5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee.scope: Deactivated successfully.
Oct 02 19:46:55 compute-0 podman[262184]: 2025-10-02 19:46:55.838945351 +0000 UTC m=+0.067381178 container died 5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.842 2 DEBUG nova.virt.libvirt.vif [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:45:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1268837722',display_name='tempest-TestNetworkBasicOps-server-1268837722',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1268837722',id=13,image_ref='c191839f-7364-41ce-80c8-eff8077fc750',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDrc+P0gsZQh0uGJUnr3zYP2O1zW9xC5+fi4i/ADlGpzcztyBgA5/6BS7XO85nY74cc89ZtOchpc4l7DeCBBR4+8aE6DrVwzE9zO6adBQFT2VqIAiIf8DphwMa6Q/KJOlg==',key_name='tempest-TestNetworkBasicOps-1819364670',keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:45:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ed4915cd456424c8ac561ce0da33795',ramdisk_id='',reservation_id='r-6c4geriv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c191839f-7364-41ce-80c8-eff8077fc750',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1499067436',owner_user_name='tempest-TestNetworkBasicOps-1499067436-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:45:16Z,user_data=None,user_id='4ebdfb48323c4124b435387dfed92c5e',uuid=77c85795-42d5-4ba9-bbb5-b7009b5f992f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.843 2 DEBUG nova.network.os_vif_util [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converting VIF {"id": "603f706b-6b06-4ad2-b22b-b118c9d68755", "address": "fa:16:3e:72:b6:fc", "network": {"id": "2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8", "bridge": "br-int", "label": "tempest-network-smoke--1153503952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ed4915cd456424c8ac561ce0da33795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap603f706b-6b", "ovs_interfaceid": "603f706b-6b06-4ad2-b22b-b118c9d68755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.844 2 DEBUG nova.network.os_vif_util [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.845 2 DEBUG os_vif [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap603f706b-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.853 2 INFO os_vif [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:b6:fc,bridge_name='br-int',has_traffic_filtering=True,id=603f706b-6b06-4ad2-b22b-b118c9d68755,network=Network(2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap603f706b-6b')
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.854 2 INFO nova.virt.libvirt.driver [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Deleting instance files /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f_del
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.854 2 INFO nova.virt.libvirt.driver [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Deletion of /var/lib/nova/instances/77c85795-42d5-4ba9-bbb5-b7009b5f992f_del complete
Oct 02 19:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee-userdata-shm.mount: Deactivated successfully.
Oct 02 19:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b452ef326dbb0051b42ecf4a020799b74a9f474e60521b4b58b79ddec358c38-merged.mount: Deactivated successfully.
Oct 02 19:46:55 compute-0 podman[262184]: 2025-10-02 19:46:55.893160267 +0000 UTC m=+0.121596084 container cleanup 5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 19:46:55 compute-0 systemd[1]: libpod-conmon-5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee.scope: Deactivated successfully.
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.925 2 INFO nova.compute.manager [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Took 0.38 seconds to destroy the instance on the hypervisor.
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.926 2 DEBUG oslo.service.loopingcall [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.926 2 DEBUG nova.compute.manager [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.927 2 DEBUG nova.network.neutron [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:46:55 compute-0 podman[262229]: 2025-10-02 19:46:55.971849755 +0000 UTC m=+0.052200383 container remove 5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.978 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0838006a-fc12-4c27-8f78-71a4984b7493]: (4, ('Thu Oct  2 07:46:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 (5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee)\n5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee\nThu Oct  2 07:46:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 (5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee)\n5f53fc8b8974a38d9a21e854d19865f1be5d638c16b839a1102cbf42e990baee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.980 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9a64a1-43e8-4d6f-baa1-5012cf59ddb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:55 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:55.981 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c6f59f2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:46:55 compute-0 kernel: tap2c6f59f2-a0: left promiscuous mode
Oct 02 19:46:55 compute-0 nova_compute[194781]: 2025-10-02 19:46:55.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:56.001 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccf68ef-5112-4ac2-84a3-6f50851165e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:56.024 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb6c1b6-ccd5-42f6-b2b3-cbbf6a4a6c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:56.025 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7fad10-ece2-4745-82dd-f59628d01f37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:56.047 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[c0695465-1cf3-4469-9c0e-ccd3ef0d16d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547395, 'reachable_time': 18674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262244, 'error': None, 'target': 'ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:56.050 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2c6f59f2-ae9a-4b33-b99c-5fe25d9484e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:46:56 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:46:56.050 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[973e0ec3-7104-4ec4-81b7-1e9476034c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:46:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d2c6f59f2\x2dae9a\x2d4b33\x2db99c\x2d5fe25d9484e8.mount: Deactivated successfully.
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.737 2 DEBUG nova.network.neutron [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.757 2 INFO nova.compute.manager [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Took 0.83 seconds to deallocate network for instance.
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.777 2 DEBUG nova.compute.manager [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-vif-unplugged-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.778 2 DEBUG oslo_concurrency.lockutils [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.780 2 DEBUG oslo_concurrency.lockutils [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.780 2 DEBUG oslo_concurrency.lockutils [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.781 2 DEBUG nova.compute.manager [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] No waiting events found dispatching network-vif-unplugged-603f706b-6b06-4ad2-b22b-b118c9d68755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.781 2 DEBUG nova.compute.manager [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-vif-unplugged-603f706b-6b06-4ad2-b22b-b118c9d68755 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.781 2 DEBUG nova.compute.manager [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.782 2 DEBUG oslo_concurrency.lockutils [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.782 2 DEBUG oslo_concurrency.lockutils [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.782 2 DEBUG oslo_concurrency.lockutils [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.782 2 DEBUG nova.compute.manager [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] No waiting events found dispatching network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.783 2 WARNING nova.compute.manager [req-bb253aba-b6fc-4ac0-8545-4e8fd4610316 req-936bdceb-6c87-4423-b089-9f6299ce8cdf fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received unexpected event network-vif-plugged-603f706b-6b06-4ad2-b22b-b118c9d68755 for instance with vm_state active and task_state deleting.
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.792 2 DEBUG nova.compute.manager [req-0556c508-cc5d-40c1-84f6-2463ff06de5c req-7ea062b5-73a0-4fb2-b869-14470729c24f fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Received event network-vif-deleted-603f706b-6b06-4ad2-b22b-b118c9d68755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.805 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.805 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.913 2 DEBUG nova.compute.provider_tree [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.928 2 DEBUG nova.scheduler.client.report [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.944 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:56 compute-0 nova_compute[194781]: 2025-10-02 19:46:56.965 2 INFO nova.scheduler.client.report [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Deleted allocations for instance 77c85795-42d5-4ba9-bbb5-b7009b5f992f
Oct 02 19:46:57 compute-0 nova_compute[194781]: 2025-10-02 19:46:57.020 2 DEBUG oslo_concurrency.lockutils [None req-a4cabf04-7a5d-4ca5-955d-9ce1a161d891 4ebdfb48323c4124b435387dfed92c5e 4ed4915cd456424c8ac561ce0da33795 - - default default] Lock "77c85795-42d5-4ba9-bbb5-b7009b5f992f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:46:58 compute-0 podman[262245]: 2025-10-02 19:46:58.734083538 +0000 UTC m=+0.108425182 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:46:58 compute-0 podman[262246]: 2025-10-02 19:46:58.803272063 +0000 UTC m=+0.166605883 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 19:46:59 compute-0 podman[209015]: time="2025-10-02T19:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:46:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:46:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5697 "" "Go-http-client/1.1"
Oct 02 19:47:00 compute-0 nova_compute[194781]: 2025-10-02 19:47:00.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:01 compute-0 nova_compute[194781]: 2025-10-02 19:47:01.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: ERROR   19:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:47:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:47:02 compute-0 ovn_controller[97052]: 2025-10-02T19:47:02Z|00188|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:47:02 compute-0 ovn_controller[97052]: 2025-10-02T19:47:02Z|00189|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:47:02 compute-0 nova_compute[194781]: 2025-10-02 19:47:02.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:02 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:47:02.947 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:47:04 compute-0 podman[262287]: 2025-10-02 19:47:04.766347823 +0000 UTC m=+0.129366921 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:47:05 compute-0 nova_compute[194781]: 2025-10-02 19:47:05.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:06 compute-0 nova_compute[194781]: 2025-10-02 19:47:06.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:06 compute-0 nova_compute[194781]: 2025-10-02 19:47:06.973 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434411.9717581, 573d9025-53e1-4cfe-b8ab-d19f024da535 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:47:06 compute-0 nova_compute[194781]: 2025-10-02 19:47:06.974 2 INFO nova.compute.manager [-] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] VM Stopped (Lifecycle Event)
Oct 02 19:47:06 compute-0 nova_compute[194781]: 2025-10-02 19:47:06.991 2 DEBUG nova.compute.manager [None req-97881ebb-13c1-48e8-a4f4-0701eb653fd5 - - - - - -] [instance: 573d9025-53e1-4cfe-b8ab-d19f024da535] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:47:10 compute-0 nova_compute[194781]: 2025-10-02 19:47:10.817 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434415.8149939, 77c85795-42d5-4ba9-bbb5-b7009b5f992f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:47:10 compute-0 nova_compute[194781]: 2025-10-02 19:47:10.818 2 INFO nova.compute.manager [-] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] VM Stopped (Lifecycle Event)
Oct 02 19:47:10 compute-0 nova_compute[194781]: 2025-10-02 19:47:10.843 2 DEBUG nova.compute.manager [None req-3f3304c2-84cb-4bee-aa01-fdbe7b8fc4d8 - - - - - -] [instance: 77c85795-42d5-4ba9-bbb5-b7009b5f992f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:47:10 compute-0 nova_compute[194781]: 2025-10-02 19:47:10.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:11 compute-0 nova_compute[194781]: 2025-10-02 19:47:11.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:12 compute-0 nova_compute[194781]: 2025-10-02 19:47:12.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.947 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.948 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.956 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.959 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.961 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ead9703a-68cd-4f65-a0dd-296c0a357b90 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct 02 19:47:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:12.961 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ead9703a-68cd-4f65-a0dd-296c0a357b90 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}7d00fd7b3129404772d7b3eeaef94222e4d12fdb730378deac028178d031ce80" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.700 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Thu, 02 Oct 2025 19:47:12 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-42ca6d45-8a59-4460-9126-5d7ba0e1dce6 x-openstack-request-id: req-42ca6d45-8a59-4460-9126-5d7ba0e1dce6 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.700 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ead9703a-68cd-4f65-a0dd-296c0a357b90", "name": "te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta", "status": "ACTIVE", "tenant_id": "3dae65399d7c47999282bff6664f6d16", "user_id": "23b5415980f24bbbbfa331c702f6f7d9", "metadata": {"metering.server_group": "d4713e41-6620-49a4-8665-1b2fbe664d9c"}, "hostId": "298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a", "image": {"id": "b43dc593-d176-449d-a8d5-95d53b8e1b5e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b43dc593-d176-449d-a8d5-95d53b8e1b5e"}]}, "flavor": {"id": "7ab5ea96-81dd-4496-8a1f-012f7d2c53c5", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7ab5ea96-81dd-4496-8a1f-012f7d2c53c5"}]}, "created": "2025-10-02T19:45:40Z", "updated": "2025-10-02T19:45:51Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.62", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c7:57:cd"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ead9703a-68cd-4f65-a0dd-296c0a357b90"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ead9703a-68cd-4f65-a0dd-296c0a357b90"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-02T19:45:51.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.700 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ead9703a-68cd-4f65-a0dd-296c0a357b90 used request id req-42ca6d45-8a59-4460-9126-5d7ba0e1dce6 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.702 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'name': 'te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.703 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:47:13.703611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.728 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 229420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.750 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 55550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.780 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/cpu volume: 80120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.780 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.780 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.781 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: 43.63671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.781 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.781 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/memory.usage volume: 47.80859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:47:13.781143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.782 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:47:13.782673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.787 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.791 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.795 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ead9703a-68cd-4f65-a0dd-296c0a357b90 / tap722eab1f-2c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.796 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:47:13.796751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes volume: 1646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.798 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.798 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:47:13.798102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.798 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.798 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:47:13.799409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.799 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.800 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:47:13.800702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-02T19:47:13.802035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta>]
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.802 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:47:13.802881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.829 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.830 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.891 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.892 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.893 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.931 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 29445120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.932 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.933 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.934 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.934 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.935 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.935 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:47:13.934158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.937 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.937 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.937 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.938 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:47:13.937688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.938 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.939 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.941 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:47:13.941836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.957 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.958 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.985 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.986 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:13.986 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.004 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.005 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.006 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.007 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 1069571389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.007 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 104981662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:47:14.006762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.007 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.008 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.008 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.008 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 899748976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.009 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 144179756 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.010 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-02T19:47:14.010609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta>]
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.012 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.012 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.012 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.013 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.013 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.013 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 1058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:47:14.011841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.014 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.016 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:47:14.015655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.016 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.016 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.018 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.018 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.018 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.019 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.019 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.019 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.020 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:47:14.017967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:47:14.022219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.022 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.022 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.023 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.023 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.023 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.023 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.024 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.025 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.025 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 5202028856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.025 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.026 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.026 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:47:14.025487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.027 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.027 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 3316225742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.027 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.029 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.029 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:47:14.029084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.029 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.030 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.030 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.030 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.030 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.031 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.032 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.032 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.032 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.033 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.033 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.033 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.034 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.034 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.034 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:47:14.032486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.036 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.037 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.037 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:47:14.036564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:47:14.039114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:47:14.040763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.042 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.043 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:47:14.042571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.043 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.044 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.044 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.045 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.045 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:47:14.044675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:47:14.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:47:14 compute-0 nova_compute[194781]: 2025-10-02 19:47:14.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:15 compute-0 podman[262312]: 2025-10-02 19:47:15.744562198 +0000 UTC m=+0.103834430 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:47:15 compute-0 podman[262313]: 2025-10-02 19:47:15.767878569 +0000 UTC m=+0.120167755 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:47:15 compute-0 nova_compute[194781]: 2025-10-02 19:47:15.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:16 compute-0 nova_compute[194781]: 2025-10-02 19:47:16.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:17 compute-0 podman[262349]: 2025-10-02 19:47:17.745317206 +0000 UTC m=+0.112345457 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Oct 02 19:47:17 compute-0 podman[262348]: 2025-10-02 19:47:17.749271391 +0000 UTC m=+0.118384628 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:47:17 compute-0 podman[262350]: 2025-10-02 19:47:17.749476867 +0000 UTC m=+0.115433550 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:47:20 compute-0 nova_compute[194781]: 2025-10-02 19:47:20.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:21 compute-0 nova_compute[194781]: 2025-10-02 19:47:21.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:23 compute-0 ovn_controller[97052]: 2025-10-02T19:47:23Z|00190|binding|INFO|Releasing lport aaa6ea3c-0164-44d4-b435-0c6c04e73e3f from this chassis (sb_readonly=0)
Oct 02 19:47:23 compute-0 ovn_controller[97052]: 2025-10-02T19:47:23Z|00191|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:47:23 compute-0 nova_compute[194781]: 2025-10-02 19:47:23.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:25 compute-0 nova_compute[194781]: 2025-10-02 19:47:25.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:26 compute-0 nova_compute[194781]: 2025-10-02 19:47:26.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:26 compute-0 podman[262414]: 2025-10-02 19:47:26.719775842 +0000 UTC m=+0.091789188 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:47:26 compute-0 podman[262415]: 2025-10-02 19:47:26.750271956 +0000 UTC m=+0.117524705 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:47:29 compute-0 podman[262452]: 2025-10-02 19:47:29.743819285 +0000 UTC m=+0.112859471 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:47:29 compute-0 podman[209015]: time="2025-10-02T19:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:47:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:47:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5688 "" "Go-http-client/1.1"
Oct 02 19:47:29 compute-0 podman[262453]: 2025-10-02 19:47:29.765372019 +0000 UTC m=+0.130437509 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:47:30 compute-0 nova_compute[194781]: 2025-10-02 19:47:30.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:31 compute-0 nova_compute[194781]: 2025-10-02 19:47:31.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:31 compute-0 openstack_network_exporter[211160]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:31 compute-0 openstack_network_exporter[211160]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:47:31 compute-0 openstack_network_exporter[211160]: ERROR   19:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:47:31 compute-0 openstack_network_exporter[211160]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:47:31 compute-0 openstack_network_exporter[211160]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:47:35 compute-0 nova_compute[194781]: 2025-10-02 19:47:35.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:35 compute-0 podman[262496]: 2025-10-02 19:47:35.754953526 +0000 UTC m=+0.118683105 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:47:35 compute-0 nova_compute[194781]: 2025-10-02 19:47:35.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:36 compute-0 nova_compute[194781]: 2025-10-02 19:47:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:36 compute-0 nova_compute[194781]: 2025-10-02 19:47:36.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:39 compute-0 nova_compute[194781]: 2025-10-02 19:47:39.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:39 compute-0 nova_compute[194781]: 2025-10-02 19:47:39.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:47:40 compute-0 nova_compute[194781]: 2025-10-02 19:47:40.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.081 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.083 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.084 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.085 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.220 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.320 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.322 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.389 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.402 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.467 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.469 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.570 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.572 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.668 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.670 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.732 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.741 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.803 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.804 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:47:41 compute-0 nova_compute[194781]: 2025-10-02 19:47:41.884 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.345 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.347 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4739MB free_disk=72.34603118896484GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.347 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.348 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.446 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.446 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.447 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.447 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.448 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.530 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.552 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.572 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:47:42 compute-0 nova_compute[194781]: 2025-10-02 19:47:42.573 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:47:45 compute-0 nova_compute[194781]: 2025-10-02 19:47:45.569 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:45 compute-0 nova_compute[194781]: 2025-10-02 19:47:45.571 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:45 compute-0 nova_compute[194781]: 2025-10-02 19:47:45.572 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:45 compute-0 nova_compute[194781]: 2025-10-02 19:47:45.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:46 compute-0 nova_compute[194781]: 2025-10-02 19:47:46.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:46 compute-0 podman[262547]: 2025-10-02 19:47:46.723852743 +0000 UTC m=+0.091325886 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:47:46 compute-0 podman[262546]: 2025-10-02 19:47:46.728731123 +0000 UTC m=+0.088674676 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Oct 02 19:47:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:47:47.494 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:47:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:47:47.495 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:47:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:47:47.495 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:47:48 compute-0 podman[262582]: 2025-10-02 19:47:48.712520239 +0000 UTC m=+0.083655771 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Oct 02 19:47:48 compute-0 podman[262584]: 2025-10-02 19:47:48.727610452 +0000 UTC m=+0.085640365 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 19:47:48 compute-0 podman[262583]: 2025-10-02 19:47:48.756449501 +0000 UTC m=+0.120120174 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Oct 02 19:47:50 compute-0 nova_compute[194781]: 2025-10-02 19:47:50.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.298 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.298 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.299 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:47:51 compute-0 nova_compute[194781]: 2025-10-02 19:47:51.299 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:47:52 compute-0 nova_compute[194781]: 2025-10-02 19:47:52.683 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:47:52 compute-0 nova_compute[194781]: 2025-10-02 19:47:52.697 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:47:52 compute-0 nova_compute[194781]: 2025-10-02 19:47:52.698 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:47:55 compute-0 ovn_controller[97052]: 2025-10-02T19:47:55Z|00192|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 02 19:47:55 compute-0 nova_compute[194781]: 2025-10-02 19:47:55.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:56 compute-0 nova_compute[194781]: 2025-10-02 19:47:56.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:47:57 compute-0 podman[262639]: 2025-10-02 19:47:57.735342725 +0000 UTC m=+0.105055032 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:47:57 compute-0 podman[262638]: 2025-10-02 19:47:57.749827661 +0000 UTC m=+0.113267341 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:47:59 compute-0 podman[209015]: time="2025-10-02T19:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:47:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:47:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5685 "" "Go-http-client/1.1"
Oct 02 19:48:00 compute-0 podman[262678]: 2025-10-02 19:48:00.719155286 +0000 UTC m=+0.091558113 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 19:48:00 compute-0 podman[262679]: 2025-10-02 19:48:00.735380768 +0000 UTC m=+0.106122741 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:48:00 compute-0 nova_compute[194781]: 2025-10-02 19:48:00.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:01 compute-0 nova_compute[194781]: 2025-10-02 19:48:01.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: ERROR   19:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:48:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:48:05 compute-0 nova_compute[194781]: 2025-10-02 19:48:05.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:06 compute-0 nova_compute[194781]: 2025-10-02 19:48:06.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:06 compute-0 podman[262721]: 2025-10-02 19:48:06.692444229 +0000 UTC m=+0.068073596 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:48:10 compute-0 nova_compute[194781]: 2025-10-02 19:48:10.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:11 compute-0 nova_compute[194781]: 2025-10-02 19:48:11.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 19:48:15 compute-0 nova_compute[194781]: 2025-10-02 19:48:15.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:16 compute-0 nova_compute[194781]: 2025-10-02 19:48:16.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:17 compute-0 podman[262744]: 2025-10-02 19:48:17.765834581 +0000 UTC m=+0.124470669 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true)
Oct 02 19:48:17 compute-0 podman[262745]: 2025-10-02 19:48:17.787292804 +0000 UTC m=+0.145238874 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930)
Oct 02 19:48:19 compute-0 podman[262782]: 2025-10-02 19:48:19.706861117 +0000 UTC m=+0.071575740 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 19:48:19 compute-0 podman[262780]: 2025-10-02 19:48:19.733432015 +0000 UTC m=+0.093168455 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter)
Oct 02 19:48:19 compute-0 podman[262781]: 2025-10-02 19:48:19.76399705 +0000 UTC m=+0.122850077 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543)
Oct 02 19:48:20 compute-0 nova_compute[194781]: 2025-10-02 19:48:20.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:21 compute-0 nova_compute[194781]: 2025-10-02 19:48:21.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:25 compute-0 nova_compute[194781]: 2025-10-02 19:48:25.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:26 compute-0 nova_compute[194781]: 2025-10-02 19:48:26.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:28 compute-0 podman[262839]: 2025-10-02 19:48:28.693680202 +0000 UTC m=+0.069971927 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 19:48:28 compute-0 podman[262838]: 2025-10-02 19:48:28.715300028 +0000 UTC m=+0.084637567 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:48:29 compute-0 podman[209015]: time="2025-10-02T19:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:48:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:48:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5689 "" "Go-http-client/1.1"
Oct 02 19:48:30 compute-0 nova_compute[194781]: 2025-10-02 19:48:30.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:31 compute-0 nova_compute[194781]: 2025-10-02 19:48:31.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: ERROR   19:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:48:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:48:31 compute-0 podman[262879]: 2025-10-02 19:48:31.775743943 +0000 UTC m=+0.138300508 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 19:48:31 compute-0 podman[262880]: 2025-10-02 19:48:31.794080452 +0000 UTC m=+0.156549595 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:48:35 compute-0 nova_compute[194781]: 2025-10-02 19:48:35.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:36 compute-0 nova_compute[194781]: 2025-10-02 19:48:36.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:36 compute-0 nova_compute[194781]: 2025-10-02 19:48:36.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:37 compute-0 nova_compute[194781]: 2025-10-02 19:48:37.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:37 compute-0 podman[262918]: 2025-10-02 19:48:37.710500248 +0000 UTC m=+0.091440269 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:48:40 compute-0 nova_compute[194781]: 2025-10-02 19:48:40.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:40 compute-0 nova_compute[194781]: 2025-10-02 19:48:40.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:48:40 compute-0 nova_compute[194781]: 2025-10-02 19:48:40.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.069 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.069 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.070 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.174 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.269 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.282 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.344 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.358 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.435 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.436 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.499 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.501 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.569 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.572 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.641 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.658 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.729 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.730 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:48:41 compute-0 nova_compute[194781]: 2025-10-02 19:48:41.827 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.362 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.364 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4753MB free_disk=72.34435272216797GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.364 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.365 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.444 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.444 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.444 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.445 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.445 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.517 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.532 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.535 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:48:42 compute-0 nova_compute[194781]: 2025-10-02 19:48:42.535 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:48:43 compute-0 nova_compute[194781]: 2025-10-02 19:48:43.536 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:44 compute-0 nova_compute[194781]: 2025-10-02 19:48:44.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:45 compute-0 nova_compute[194781]: 2025-10-02 19:48:45.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:45 compute-0 nova_compute[194781]: 2025-10-02 19:48:45.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:45 compute-0 nova_compute[194781]: 2025-10-02 19:48:45.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:46 compute-0 nova_compute[194781]: 2025-10-02 19:48:46.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:48:47.496 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:48:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:48:47.497 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:48:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:48:47.499 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:48:48 compute-0 podman[262968]: 2025-10-02 19:48:48.774705424 +0000 UTC m=+0.134551089 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 02 19:48:48 compute-0 podman[262967]: 2025-10-02 19:48:48.778390422 +0000 UTC m=+0.135292838 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:48:50 compute-0 podman[263006]: 2025-10-02 19:48:50.741019973 +0000 UTC m=+0.100388337 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Oct 02 19:48:50 compute-0 podman[263008]: 2025-10-02 19:48:50.758308444 +0000 UTC m=+0.113011994 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 19:48:50 compute-0 podman[263007]: 2025-10-02 19:48:50.76452043 +0000 UTC m=+0.122233030 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct 02 19:48:50 compute-0 nova_compute[194781]: 2025-10-02 19:48:50.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:51 compute-0 nova_compute[194781]: 2025-10-02 19:48:51.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:52 compute-0 nova_compute[194781]: 2025-10-02 19:48:52.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:52 compute-0 nova_compute[194781]: 2025-10-02 19:48:52.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:48:52 compute-0 nova_compute[194781]: 2025-10-02 19:48:52.361 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:48:52 compute-0 nova_compute[194781]: 2025-10-02 19:48:52.362 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:48:52 compute-0 nova_compute[194781]: 2025-10-02 19:48:52.363 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:48:53 compute-0 nova_compute[194781]: 2025-10-02 19:48:53.493 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:48:53 compute-0 nova_compute[194781]: 2025-10-02 19:48:53.519 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:48:53 compute-0 nova_compute[194781]: 2025-10-02 19:48:53.520 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:48:55 compute-0 nova_compute[194781]: 2025-10-02 19:48:55.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:56 compute-0 nova_compute[194781]: 2025-10-02 19:48:56.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:48:58 compute-0 nova_compute[194781]: 2025-10-02 19:48:58.517 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:48:59 compute-0 podman[263062]: 2025-10-02 19:48:59.728118007 +0000 UTC m=+0.093543125 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:48:59 compute-0 podman[263063]: 2025-10-02 19:48:59.745590393 +0000 UTC m=+0.112428889 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:48:59 compute-0 podman[209015]: time="2025-10-02T19:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:48:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:48:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5692 "" "Go-http-client/1.1"
Oct 02 19:49:00 compute-0 nova_compute[194781]: 2025-10-02 19:49:00.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:01 compute-0 nova_compute[194781]: 2025-10-02 19:49:01.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: ERROR   19:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:49:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:49:02 compute-0 podman[263103]: 2025-10-02 19:49:02.729704882 +0000 UTC m=+0.104639431 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:49:02 compute-0 podman[263104]: 2025-10-02 19:49:02.811277287 +0000 UTC m=+0.168223387 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 19:49:05 compute-0 nova_compute[194781]: 2025-10-02 19:49:05.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:06 compute-0 nova_compute[194781]: 2025-10-02 19:49:06.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:07 compute-0 unix_chkpwd[263151]: password check failed for user (root)
Oct 02 19:49:07 compute-0 sshd-session[263149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.10.225  user=root
Oct 02 19:49:08 compute-0 podman[263152]: 2025-10-02 19:49:08.773574346 +0000 UTC m=+0.124152181 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:49:09 compute-0 sshd-session[263149]: Failed password for root from 141.98.10.225 port 13554 ssh2
Oct 02 19:49:10 compute-0 unix_chkpwd[263175]: password check failed for user (root)
Oct 02 19:49:10 compute-0 nova_compute[194781]: 2025-10-02 19:49:10.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:11 compute-0 nova_compute[194781]: 2025-10-02 19:49:11.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:12 compute-0 sshd-session[263149]: Failed password for root from 141.98.10.225 port 13554 ssh2
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.948 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.949 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba74cad80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.959 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.965 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.971 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'name': 'te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.971 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.972 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:12.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:49:12.972079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.011 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 330620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.056 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 57230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.113 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/cpu volume: 198930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.115 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.115 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.116 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: 43.171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.116 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.117 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/memory.usage volume: 46.99609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:49:13.115732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:49:13.119076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.125 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.132 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.138 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.139 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.140 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:49:13.140451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.140 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.141 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.141 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.143 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.143 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:49:13.143519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.144 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.144 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.144 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.146 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:49:13.146659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.147 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.147 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.148 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.149 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.149 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.150 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:49:13.149773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.150 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.150 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.151 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.152 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.153 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:49:13.153554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.205 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.205 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.312 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.312 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.313 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.366 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 29445120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.367 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.368 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.369 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:49:13.369363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.370 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.370 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.371 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.372 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:49:13.372466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.372 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.373 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.373 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.375 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:49:13.375452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.402 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.403 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.440 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.463 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.464 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.467 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:49:13.467089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.467 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 1101066582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.468 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 115063820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.469 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.469 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.470 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.470 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 899748976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.471 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 144179756 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.472 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.472 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.473 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.473 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.473 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.474 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.474 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.475 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.475 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 1058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.475 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 unix_chkpwd[263177]: password check failed for user (root)
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.477 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.477 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.477 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.478 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.478 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.479 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:49:13.473067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:49:13.477888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:49:13.482830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.483 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.484 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.484 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.485 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.486 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.486 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.487 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.490 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:49:13.490100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.490 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 72986624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.491 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.492 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.492 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.493 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.494 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 72871936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.494 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.496 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.496 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 5210464291 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.497 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.497 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.498 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.498 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.498 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 3335552889 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.499 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:49:13.496652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.500 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.501 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.501 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.501 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.502 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.502 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.502 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:49:13.500637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.504 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 331 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.504 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.505 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.505 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.505 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.506 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.506 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:49:13.504245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.508 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.508 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.508 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:49:13.508092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:49:13.510163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.511 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.511 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.513 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.513 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.513 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.514 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.514 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.515 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.515 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.515 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:49:13.511751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:49:13.513144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:49:13.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:49:13.514942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:49:15 compute-0 sshd-session[263149]: Failed password for root from 141.98.10.225 port 13554 ssh2
Oct 02 19:49:15 compute-0 nova_compute[194781]: 2025-10-02 19:49:15.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:16 compute-0 nova_compute[194781]: 2025-10-02 19:49:16.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:16 compute-0 sshd-session[263149]: Received disconnect from 141.98.10.225 port 13554:11:  [preauth]
Oct 02 19:49:16 compute-0 sshd-session[263149]: Disconnected from authenticating user root 141.98.10.225 port 13554 [preauth]
Oct 02 19:49:16 compute-0 sshd-session[263149]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.10.225  user=root
Oct 02 19:49:17 compute-0 unix_chkpwd[263181]: password check failed for user (root)
Oct 02 19:49:17 compute-0 sshd-session[263179]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.10.225  user=root
Oct 02 19:49:19 compute-0 podman[263184]: 2025-10-02 19:49:19.721447562 +0000 UTC m=+0.089399755 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:49:19 compute-0 podman[263183]: 2025-10-02 19:49:19.736076252 +0000 UTC m=+0.110823686 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 19:49:19 compute-0 sshd-session[263179]: Failed password for root from 141.98.10.225 port 53534 ssh2
Oct 02 19:49:20 compute-0 unix_chkpwd[263222]: password check failed for user (root)
Oct 02 19:49:20 compute-0 nova_compute[194781]: 2025-10-02 19:49:20.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:21 compute-0 nova_compute[194781]: 2025-10-02 19:49:21.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:21 compute-0 podman[263224]: 2025-10-02 19:49:21.753251118 +0000 UTC m=+0.109142461 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:49:21 compute-0 podman[263223]: 2025-10-02 19:49:21.762513775 +0000 UTC m=+0.123213876 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, distribution-scope=public)
Oct 02 19:49:21 compute-0 podman[263225]: 2025-10-02 19:49:21.767841997 +0000 UTC m=+0.122587449 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Oct 02 19:49:22 compute-0 sshd-session[263179]: Failed password for root from 141.98.10.225 port 53534 ssh2
Oct 02 19:49:23 compute-0 unix_chkpwd[263281]: password check failed for user (root)
Oct 02 19:49:25 compute-0 sshd-session[263179]: Failed password for root from 141.98.10.225 port 53534 ssh2
Oct 02 19:49:25 compute-0 nova_compute[194781]: 2025-10-02 19:49:25.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:26 compute-0 nova_compute[194781]: 2025-10-02 19:49:26.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:27 compute-0 sshd-session[263179]: Received disconnect from 141.98.10.225 port 53534:11:  [preauth]
Oct 02 19:49:27 compute-0 sshd-session[263179]: Disconnected from authenticating user root 141.98.10.225 port 53534 [preauth]
Oct 02 19:49:27 compute-0 sshd-session[263179]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.10.225  user=root
Oct 02 19:49:27 compute-0 unix_chkpwd[263284]: password check failed for user (root)
Oct 02 19:49:27 compute-0 sshd-session[263282]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.10.225  user=root
Oct 02 19:49:29 compute-0 sshd-session[263282]: Failed password for root from 141.98.10.225 port 46970 ssh2
Oct 02 19:49:29 compute-0 podman[209015]: time="2025-10-02T19:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:49:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:49:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5697 "" "Go-http-client/1.1"
Oct 02 19:49:30 compute-0 podman[263286]: 2025-10-02 19:49:30.766596091 +0000 UTC m=+0.121691616 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:49:30 compute-0 podman[263285]: 2025-10-02 19:49:30.775921869 +0000 UTC m=+0.139098680 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:49:30 compute-0 nova_compute[194781]: 2025-10-02 19:49:30.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:31 compute-0 unix_chkpwd[263327]: password check failed for user (root)
Oct 02 19:49:31 compute-0 nova_compute[194781]: 2025-10-02 19:49:31.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: ERROR   19:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:49:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:49:32 compute-0 sshd-session[263282]: Failed password for root from 141.98.10.225 port 46970 ssh2
Oct 02 19:49:33 compute-0 podman[263328]: 2025-10-02 19:49:33.752767524 +0000 UTC m=+0.118279105 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 19:49:33 compute-0 podman[263329]: 2025-10-02 19:49:33.808987903 +0000 UTC m=+0.168406151 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:49:34 compute-0 unix_chkpwd[263372]: password check failed for user (root)
Oct 02 19:49:35 compute-0 sshd-session[263282]: Failed password for root from 141.98.10.225 port 46970 ssh2
Oct 02 19:49:35 compute-0 nova_compute[194781]: 2025-10-02 19:49:35.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:36 compute-0 nova_compute[194781]: 2025-10-02 19:49:36.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:37 compute-0 sshd-session[263282]: Received disconnect from 141.98.10.225 port 46970:11:  [preauth]
Oct 02 19:49:37 compute-0 sshd-session[263282]: Disconnected from authenticating user root 141.98.10.225 port 46970 [preauth]
Oct 02 19:49:37 compute-0 sshd-session[263282]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.10.225  user=root
Oct 02 19:49:38 compute-0 nova_compute[194781]: 2025-10-02 19:49:38.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:39 compute-0 nova_compute[194781]: 2025-10-02 19:49:39.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:39 compute-0 podman[263373]: 2025-10-02 19:49:39.735647663 +0000 UTC m=+0.094212353 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:49:40 compute-0 nova_compute[194781]: 2025-10-02 19:49:40.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:40 compute-0 nova_compute[194781]: 2025-10-02 19:49:40.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:49:41 compute-0 nova_compute[194781]: 2025-10-02 19:49:41.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:41 compute-0 nova_compute[194781]: 2025-10-02 19:49:41.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.069 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.070 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.070 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.204 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.304 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.305 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.402 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.410 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.505 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.506 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.604 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.608 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.677 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.680 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.749 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.766 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.836 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.838 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:49:43 compute-0 nova_compute[194781]: 2025-10-02 19:49:43.908 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.385 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.387 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4749MB free_disk=72.34424209594727GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.387 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.388 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.470 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.470 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.470 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.471 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.471 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.540 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.562 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.563 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:49:44 compute-0 nova_compute[194781]: 2025-10-02 19:49:44.564 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:45 compute-0 nova_compute[194781]: 2025-10-02 19:49:45.560 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:45 compute-0 nova_compute[194781]: 2025-10-02 19:49:45.561 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:46 compute-0 nova_compute[194781]: 2025-10-02 19:49:46.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:46 compute-0 nova_compute[194781]: 2025-10-02 19:49:46.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:46 compute-0 nova_compute[194781]: 2025-10-02 19:49:46.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:47 compute-0 nova_compute[194781]: 2025-10-02 19:49:47.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:49:47.498 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:49:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:49:47.498 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:49:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:49:47.499 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:49:50 compute-0 podman[263423]: 2025-10-02 19:49:50.754648765 +0000 UTC m=+0.128752134 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:49:50 compute-0 podman[263424]: 2025-10-02 19:49:50.758702944 +0000 UTC m=+0.118909792 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:49:51 compute-0 nova_compute[194781]: 2025-10-02 19:49:51.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:51 compute-0 nova_compute[194781]: 2025-10-02 19:49:51.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:52 compute-0 nova_compute[194781]: 2025-10-02 19:49:52.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:49:52 compute-0 nova_compute[194781]: 2025-10-02 19:49:52.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:49:52 compute-0 nova_compute[194781]: 2025-10-02 19:49:52.399 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:49:52 compute-0 nova_compute[194781]: 2025-10-02 19:49:52.399 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:49:52 compute-0 nova_compute[194781]: 2025-10-02 19:49:52.400 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:49:52 compute-0 podman[263462]: 2025-10-02 19:49:52.728489996 +0000 UTC m=+0.091718897 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:49:52 compute-0 podman[263463]: 2025-10-02 19:49:52.744376149 +0000 UTC m=+0.103936272 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 19:49:52 compute-0 podman[263461]: 2025-10-02 19:49:52.767833665 +0000 UTC m=+0.127420859 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 02 19:49:53 compute-0 nova_compute[194781]: 2025-10-02 19:49:53.436 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updating instance_info_cache with network_info: [{"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:49:53 compute-0 nova_compute[194781]: 2025-10-02 19:49:53.477 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:49:53 compute-0 nova_compute[194781]: 2025-10-02 19:49:53.478 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:49:56 compute-0 nova_compute[194781]: 2025-10-02 19:49:56.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:56 compute-0 nova_compute[194781]: 2025-10-02 19:49:56.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:49:59 compute-0 podman[209015]: time="2025-10-02T19:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:49:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:49:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5690 "" "Go-http-client/1.1"
Oct 02 19:50:01 compute-0 nova_compute[194781]: 2025-10-02 19:50:01.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:01 compute-0 nova_compute[194781]: 2025-10-02 19:50:01.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: ERROR   19:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:50:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:50:01 compute-0 podman[263519]: 2025-10-02 19:50:01.702650113 +0000 UTC m=+0.079382038 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:50:01 compute-0 podman[263520]: 2025-10-02 19:50:01.769010602 +0000 UTC m=+0.141299368 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.034 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.035 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.035 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.036 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.036 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.036 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.065 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.090 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.090 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Image id b43dc593-d176-449d-a8d5-95d53b8e1b5e yields fingerprint dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.090 2 INFO nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] image b43dc593-d176-449d-a8d5-95d53b8e1b5e at (/var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e): checking
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.091 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] image b43dc593-d176-449d-a8d5-95d53b8e1b5e at (/var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.093 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.093 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Image id 2c6780ee-8ca6-4dab-831c-c89907768547 yields fingerprint e2414b9b934482058b2047ac6d18f7f90fd5db4d _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.093 2 INFO nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] image 2c6780ee-8ca6-4dab-831c-c89907768547 at (/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d): checking
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.094 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] image 2c6780ee-8ca6-4dab-831c-c89907768547 at (/var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.095 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.095 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.095 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.177 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.179 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 is backed by e2414b9b934482058b2047ac6d18f7f90fd5db4d _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.179 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.179 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.180 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.250 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.251 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a is backed by dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.251 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] ead9703a-68cd-4f65-a0dd-296c0a357b90 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.252 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] ead9703a-68cd-4f65-a0dd-296c0a357b90 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.252 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.344 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.345 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 is backed by dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.346 2 WARNING nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.346 2 WARNING nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.346 2 INFO nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Active base files: /var/lib/nova/instances/_base/dc5d2a506047b7ee33f0dabc7ca93978f8e03a4e /var/lib/nova/instances/_base/e2414b9b934482058b2047ac6d18f7f90fd5db4d
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.347 2 INFO nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Removable base files: /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9 /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.348 2 INFO nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/731e6f3a25a50045fefcd1e8c54cf1a5094696c9
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.348 2 INFO nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a9843d922d50b317c389e448cbaaf7849a9d0409
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.348 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.349 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 19:50:04 compute-0 nova_compute[194781]: 2025-10-02 19:50:04.349 2 DEBUG nova.virt.libvirt.imagecache [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 19:50:04 compute-0 podman[263572]: 2025-10-02 19:50:04.713207998 +0000 UTC m=+0.081152765 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:50:04 compute-0 podman[263573]: 2025-10-02 19:50:04.777328207 +0000 UTC m=+0.131853216 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:50:06 compute-0 nova_compute[194781]: 2025-10-02 19:50:06.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:06 compute-0 nova_compute[194781]: 2025-10-02 19:50:06.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:10 compute-0 podman[263615]: 2025-10-02 19:50:10.734103939 +0000 UTC m=+0.093585606 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:50:11 compute-0 nova_compute[194781]: 2025-10-02 19:50:11.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:11 compute-0 nova_compute[194781]: 2025-10-02 19:50:11.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:16 compute-0 nova_compute[194781]: 2025-10-02 19:50:16.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:16 compute-0 nova_compute[194781]: 2025-10-02 19:50:16.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:21 compute-0 nova_compute[194781]: 2025-10-02 19:50:21.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:21 compute-0 nova_compute[194781]: 2025-10-02 19:50:21.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:21 compute-0 podman[263639]: 2025-10-02 19:50:21.777675667 +0000 UTC m=+0.130738477 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 19:50:21 compute-0 podman[263638]: 2025-10-02 19:50:21.777778309 +0000 UTC m=+0.134353883 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:50:23 compute-0 podman[263676]: 2025-10-02 19:50:23.774625003 +0000 UTC m=+0.119548709 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, container_name=kepler, vcs-type=git, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Oct 02 19:50:23 compute-0 podman[263675]: 2025-10-02 19:50:23.776307918 +0000 UTC m=+0.130265434 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=)
Oct 02 19:50:23 compute-0 podman[263677]: 2025-10-02 19:50:23.809066752 +0000 UTC m=+0.148981024 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:50:26 compute-0 nova_compute[194781]: 2025-10-02 19:50:26.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:26 compute-0 nova_compute[194781]: 2025-10-02 19:50:26.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:29 compute-0 podman[209015]: time="2025-10-02T19:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:50:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:50:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5694 "" "Go-http-client/1.1"
Oct 02 19:50:31 compute-0 nova_compute[194781]: 2025-10-02 19:50:31.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:31 compute-0 nova_compute[194781]: 2025-10-02 19:50:31.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: ERROR   19:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:31 compute-0 openstack_network_exporter[211160]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:50:32 compute-0 podman[263733]: 2025-10-02 19:50:32.730442083 +0000 UTC m=+0.092633261 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:50:32 compute-0 podman[263732]: 2025-10-02 19:50:32.757851014 +0000 UTC m=+0.120507745 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:50:35 compute-0 podman[263776]: 2025-10-02 19:50:35.76774573 +0000 UTC m=+0.128895048 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:50:35 compute-0 podman[263777]: 2025-10-02 19:50:35.843333385 +0000 UTC m=+0.198396471 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:50:36 compute-0 nova_compute[194781]: 2025-10-02 19:50:36.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:36 compute-0 nova_compute[194781]: 2025-10-02 19:50:36.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:38 compute-0 nova_compute[194781]: 2025-10-02 19:50:38.351 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:39 compute-0 nova_compute[194781]: 2025-10-02 19:50:39.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:40 compute-0 nova_compute[194781]: 2025-10-02 19:50:40.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:40 compute-0 nova_compute[194781]: 2025-10-02 19:50:40.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:50:41 compute-0 nova_compute[194781]: 2025-10-02 19:50:41.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:41 compute-0 nova_compute[194781]: 2025-10-02 19:50:41.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:41 compute-0 podman[263821]: 2025-10-02 19:50:41.797144978 +0000 UTC m=+0.161828376 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.070 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.072 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.073 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.205 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.305 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.307 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.389 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.400 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.469 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.471 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.540 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.542 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.608 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.612 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.690 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.697 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.768 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.770 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:50:44 compute-0 nova_compute[194781]: 2025-10-02 19:50:44.833 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.312 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.316 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4751MB free_disk=72.34423828125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.317 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.318 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.488 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.489 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.490 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.491 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.492 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.574 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.644 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.645 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.662 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.683 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.766 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.783 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.786 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:50:45 compute-0 nova_compute[194781]: 2025-10-02 19:50:45.787 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:46 compute-0 nova_compute[194781]: 2025-10-02 19:50:46.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:46 compute-0 nova_compute[194781]: 2025-10-02 19:50:46.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:46 compute-0 nova_compute[194781]: 2025-10-02 19:50:46.785 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:46 compute-0 nova_compute[194781]: 2025-10-02 19:50:46.787 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:50:47.499 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:50:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:50:47.499 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:50:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:50:47.500 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:50:48 compute-0 nova_compute[194781]: 2025-10-02 19:50:48.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:48 compute-0 nova_compute[194781]: 2025-10-02 19:50:48.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:51 compute-0 nova_compute[194781]: 2025-10-02 19:50:51.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:51 compute-0 nova_compute[194781]: 2025-10-02 19:50:51.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:52 compute-0 podman[263869]: 2025-10-02 19:50:52.759405297 +0000 UTC m=+0.120686769 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:50:52 compute-0 podman[263868]: 2025-10-02 19:50:52.79363243 +0000 UTC m=+0.143950279 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.037 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.037 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.450 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.451 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.452 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:50:53 compute-0 nova_compute[194781]: 2025-10-02 19:50:53.453 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:50:54 compute-0 podman[263905]: 2025-10-02 19:50:54.734636016 +0000 UTC m=+0.101116618 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Oct 02 19:50:54 compute-0 nova_compute[194781]: 2025-10-02 19:50:54.759 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:50:54 compute-0 podman[263906]: 2025-10-02 19:50:54.760052643 +0000 UTC m=+0.110100236 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:50:54 compute-0 podman[263907]: 2025-10-02 19:50:54.766376212 +0000 UTC m=+0.109463490 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:50:54 compute-0 nova_compute[194781]: 2025-10-02 19:50:54.829 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:50:54 compute-0 nova_compute[194781]: 2025-10-02 19:50:54.829 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:50:55 compute-0 nova_compute[194781]: 2025-10-02 19:50:55.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:55 compute-0 nova_compute[194781]: 2025-10-02 19:50:55.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:50:55 compute-0 nova_compute[194781]: 2025-10-02 19:50:55.096 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:50:56 compute-0 nova_compute[194781]: 2025-10-02 19:50:56.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:56 compute-0 nova_compute[194781]: 2025-10-02 19:50:56.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:50:57 compute-0 nova_compute[194781]: 2025-10-02 19:50:57.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:57 compute-0 nova_compute[194781]: 2025-10-02 19:50:57.115 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:50:57 compute-0 nova_compute[194781]: 2025-10-02 19:50:57.116 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:50:59 compute-0 podman[209015]: time="2025-10-02T19:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:50:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:50:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5690 "" "Go-http-client/1.1"
Oct 02 19:51:01 compute-0 nova_compute[194781]: 2025-10-02 19:51:01.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:01 compute-0 nova_compute[194781]: 2025-10-02 19:51:01.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: ERROR   19:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:51:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:51:03 compute-0 podman[263962]: 2025-10-02 19:51:03.753911736 +0000 UTC m=+0.115954152 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:51:03 compute-0 podman[263963]: 2025-10-02 19:51:03.779322664 +0000 UTC m=+0.129518815 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:51:06 compute-0 nova_compute[194781]: 2025-10-02 19:51:06.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:06 compute-0 nova_compute[194781]: 2025-10-02 19:51:06.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:06 compute-0 podman[264003]: 2025-10-02 19:51:06.781570996 +0000 UTC m=+0.144547935 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:51:06 compute-0 podman[264004]: 2025-10-02 19:51:06.822991371 +0000 UTC m=+0.178296186 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Oct 02 19:51:08 compute-0 nova_compute[194781]: 2025-10-02 19:51:08.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:11 compute-0 nova_compute[194781]: 2025-10-02 19:51:11.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:11 compute-0 nova_compute[194781]: 2025-10-02 19:51:11.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:12 compute-0 podman[264044]: 2025-10-02 19:51:12.775921741 +0000 UTC m=+0.132029231 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.949 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.949 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba73cd5e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.962 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.967 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.973 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'name': 'te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.973 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.974 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:12.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:51:12.974440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.005 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 332520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.042 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 59230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.100 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/cpu volume: 318610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.102 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.103 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.103 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.103 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.104 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/memory.usage volume: 46.99609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:51:13.103126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.106 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:51:13.106822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.113 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.119 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.125 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.128 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.128 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.129 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:51:13.127734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.131 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.132 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.132 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.133 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.134 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.135 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:51:13.132358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.136 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.137 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.137 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.138 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.138 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:51:13.134577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:51:13.136463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:51:13.138293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.186 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.187 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.263 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.263 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.264 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.316 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 29445120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.317 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.318 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.319 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:51:13.319089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.320 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.320 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.321 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.322 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.322 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.323 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:51:13.321937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:51:13.325128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.349 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.349 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.387 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.387 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.388 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.411 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.411 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.413 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 1101066582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:51:13.413656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.414 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 115063820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.415 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.415 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.415 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.416 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 899748976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.416 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 144179756 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.418 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.418 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.419 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.419 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:51:13.419158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.420 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.420 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.421 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.421 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.421 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 1058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.422 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.424 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.425 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.425 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:51:13.424327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.427 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.427 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.428 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.428 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:51:13.427445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.429 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.429 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.430 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.430 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.431 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:51:13.432378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.432 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.433 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.433 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.434 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.434 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.435 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 72871936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.435 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.437 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 5300249955 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:51:13.437377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.438 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.438 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.439 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.440 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 3335552889 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.440 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.442 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.443 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:51:13.442446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.443 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.444 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.444 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.444 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.445 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.446 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.446 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.446 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.446 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.447 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.447 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.447 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.447 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.448 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.449 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.449 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.449 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.449 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.450 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.450 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.450 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.451 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:51:13.446011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:51:13.449126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:51:13.451044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.452 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.453 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:51:13.452833) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.454 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.454 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.454 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.455 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.456 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:51:13.454465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.456 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.457 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.457 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.458 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.459 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.460 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.461 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:51:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:51:13.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:51:13.456594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:51:16 compute-0 nova_compute[194781]: 2025-10-02 19:51:16.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:16 compute-0 nova_compute[194781]: 2025-10-02 19:51:16.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:21 compute-0 nova_compute[194781]: 2025-10-02 19:51:21.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:21 compute-0 nova_compute[194781]: 2025-10-02 19:51:21.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:23 compute-0 podman[264069]: 2025-10-02 19:51:23.745922894 +0000 UTC m=+0.093853374 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:51:23 compute-0 podman[264070]: 2025-10-02 19:51:23.748279277 +0000 UTC m=+0.090245638 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct 02 19:51:25 compute-0 podman[264106]: 2025-10-02 19:51:25.742304794 +0000 UTC m=+0.107963370 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:51:25 compute-0 podman[264113]: 2025-10-02 19:51:25.77289157 +0000 UTC m=+0.113579450 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 19:51:25 compute-0 podman[264107]: 2025-10-02 19:51:25.775752826 +0000 UTC m=+0.121978183 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc.)
Oct 02 19:51:26 compute-0 nova_compute[194781]: 2025-10-02 19:51:26.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:26 compute-0 nova_compute[194781]: 2025-10-02 19:51:26.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:29 compute-0 podman[209015]: time="2025-10-02T19:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:51:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:51:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5691 "" "Go-http-client/1.1"
Oct 02 19:51:31 compute-0 nova_compute[194781]: 2025-10-02 19:51:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:31 compute-0 nova_compute[194781]: 2025-10-02 19:51:31.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:31 compute-0 openstack_network_exporter[211160]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:31 compute-0 openstack_network_exporter[211160]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:51:31 compute-0 openstack_network_exporter[211160]: ERROR   19:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:51:31 compute-0 openstack_network_exporter[211160]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:51:31 compute-0 openstack_network_exporter[211160]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:51:34 compute-0 podman[264163]: 2025-10-02 19:51:34.763208243 +0000 UTC m=+0.119323531 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:51:34 compute-0 podman[264164]: 2025-10-02 19:51:34.791517968 +0000 UTC m=+0.139719295 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:51:36 compute-0 nova_compute[194781]: 2025-10-02 19:51:36.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:36 compute-0 nova_compute[194781]: 2025-10-02 19:51:36.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:37 compute-0 podman[264203]: 2025-10-02 19:51:37.77952687 +0000 UTC m=+0.134662491 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 19:51:37 compute-0 podman[264204]: 2025-10-02 19:51:37.820618376 +0000 UTC m=+0.182911108 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:51:40 compute-0 nova_compute[194781]: 2025-10-02 19:51:40.048 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:41 compute-0 nova_compute[194781]: 2025-10-02 19:51:41.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:41 compute-0 nova_compute[194781]: 2025-10-02 19:51:41.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:41 compute-0 nova_compute[194781]: 2025-10-02 19:51:41.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:42 compute-0 nova_compute[194781]: 2025-10-02 19:51:42.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:42 compute-0 nova_compute[194781]: 2025-10-02 19:51:42.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:51:43 compute-0 podman[264245]: 2025-10-02 19:51:43.714922553 +0000 UTC m=+0.075443743 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.074 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.075 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.076 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.076 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.219 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.326 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.328 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.426 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.440 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.537 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.539 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.605 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.607 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.700 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.701 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.772 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.780 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.845 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.847 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:51:44 compute-0 nova_compute[194781]: 2025-10-02 19:51:44.944 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.453 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.454 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4742MB free_disk=72.34423828125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.454 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.455 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.537 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.537 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.537 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.537 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.538 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.627 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.640 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.641 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:51:45 compute-0 nova_compute[194781]: 2025-10-02 19:51:45.642 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.636 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.638 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.672 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.673 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid f0ac40ea-f3c9-4981-ba99-bfbf34bd253a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.673 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid ead9703a-68cd-4f65-a0dd-296c0a357b90 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.674 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.675 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.675 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.676 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.677 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.728 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.728 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:46 compute-0 nova_compute[194781]: 2025-10-02 19:51:46.758 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:47 compute-0 nova_compute[194781]: 2025-10-02 19:51:47.073 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:51:47.500 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:51:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:51:47.502 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:51:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:51:47.503 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:51:49 compute-0 nova_compute[194781]: 2025-10-02 19:51:49.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:50 compute-0 nova_compute[194781]: 2025-10-02 19:51:50.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:51 compute-0 nova_compute[194781]: 2025-10-02 19:51:51.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:51 compute-0 nova_compute[194781]: 2025-10-02 19:51:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:54 compute-0 nova_compute[194781]: 2025-10-02 19:51:54.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:51:54 compute-0 nova_compute[194781]: 2025-10-02 19:51:54.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:51:54 compute-0 nova_compute[194781]: 2025-10-02 19:51:54.448 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:51:54 compute-0 nova_compute[194781]: 2025-10-02 19:51:54.449 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:51:54 compute-0 nova_compute[194781]: 2025-10-02 19:51:54.450 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:51:54 compute-0 podman[264294]: 2025-10-02 19:51:54.725372028 +0000 UTC m=+0.087985478 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Oct 02 19:51:54 compute-0 podman[264295]: 2025-10-02 19:51:54.737104311 +0000 UTC m=+0.089102056 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:51:56 compute-0 nova_compute[194781]: 2025-10-02 19:51:56.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:56 compute-0 nova_compute[194781]: 2025-10-02 19:51:56.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:51:56 compute-0 nova_compute[194781]: 2025-10-02 19:51:56.469 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:51:56 compute-0 nova_compute[194781]: 2025-10-02 19:51:56.488 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:51:56 compute-0 nova_compute[194781]: 2025-10-02 19:51:56.489 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:51:56 compute-0 podman[264331]: 2025-10-02 19:51:56.725289851 +0000 UTC m=+0.092727140 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 02 19:51:56 compute-0 podman[264333]: 2025-10-02 19:51:56.730211918 +0000 UTC m=+0.090385659 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct 02 19:51:56 compute-0 podman[264332]: 2025-10-02 19:51:56.733701029 +0000 UTC m=+0.094632020 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=)
Oct 02 19:51:59 compute-0 podman[209015]: time="2025-10-02T19:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:51:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:51:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5696 "" "Go-http-client/1.1"
Oct 02 19:52:01 compute-0 nova_compute[194781]: 2025-10-02 19:52:01.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:01 compute-0 nova_compute[194781]: 2025-10-02 19:52:01.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: ERROR   19:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:52:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:52:05 compute-0 podman[264408]: 2025-10-02 19:52:05.71916257 +0000 UTC m=+0.088461699 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:52:05 compute-0 podman[264409]: 2025-10-02 19:52:05.763918578 +0000 UTC m=+0.116178297 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid)
Oct 02 19:52:06 compute-0 nova_compute[194781]: 2025-10-02 19:52:06.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:06 compute-0 nova_compute[194781]: 2025-10-02 19:52:06.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:08 compute-0 podman[264450]: 2025-10-02 19:52:08.732994217 +0000 UTC m=+0.100377448 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:52:08 compute-0 podman[264451]: 2025-10-02 19:52:08.807481694 +0000 UTC m=+0.160201635 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:52:11 compute-0 nova_compute[194781]: 2025-10-02 19:52:11.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:11 compute-0 nova_compute[194781]: 2025-10-02 19:52:11.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:14 compute-0 podman[264494]: 2025-10-02 19:52:14.77572639 +0000 UTC m=+0.134722017 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:52:16 compute-0 nova_compute[194781]: 2025-10-02 19:52:16.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:16 compute-0 nova_compute[194781]: 2025-10-02 19:52:16.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:21 compute-0 nova_compute[194781]: 2025-10-02 19:52:21.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:21 compute-0 nova_compute[194781]: 2025-10-02 19:52:21.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:25 compute-0 podman[264518]: 2025-10-02 19:52:25.812997507 +0000 UTC m=+0.155136865 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:52:25 compute-0 podman[264517]: 2025-10-02 19:52:25.820127822 +0000 UTC m=+0.165361550 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Oct 02 19:52:26 compute-0 nova_compute[194781]: 2025-10-02 19:52:26.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:26 compute-0 nova_compute[194781]: 2025-10-02 19:52:26.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:27 compute-0 podman[264555]: 2025-10-02 19:52:27.765224897 +0000 UTC m=+0.116275859 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Oct 02 19:52:27 compute-0 podman[264557]: 2025-10-02 19:52:27.767499026 +0000 UTC m=+0.104605127 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:52:27 compute-0 podman[264556]: 2025-10-02 19:52:27.822342115 +0000 UTC m=+0.163843270 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': 
'/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-container, release-0.7.12=, io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler)
Oct 02 19:52:29 compute-0 podman[209015]: time="2025-10-02T19:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:52:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:52:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5697 "" "Go-http-client/1.1"
Oct 02 19:52:31 compute-0 nova_compute[194781]: 2025-10-02 19:52:31.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:31 compute-0 nova_compute[194781]: 2025-10-02 19:52:31.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: ERROR   19:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:52:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:52:36 compute-0 nova_compute[194781]: 2025-10-02 19:52:36.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:36 compute-0 nova_compute[194781]: 2025-10-02 19:52:36.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:36 compute-0 podman[264614]: 2025-10-02 19:52:36.715377216 +0000 UTC m=+0.082583708 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:52:36 compute-0 podman[264615]: 2025-10-02 19:52:36.735111137 +0000 UTC m=+0.092285729 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:52:39 compute-0 podman[264654]: 2025-10-02 19:52:39.76307989 +0000 UTC m=+0.118493087 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:52:39 compute-0 podman[264655]: 2025-10-02 19:52:39.794861882 +0000 UTC m=+0.140916317 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 19:52:41 compute-0 nova_compute[194781]: 2025-10-02 19:52:41.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:41 compute-0 nova_compute[194781]: 2025-10-02 19:52:41.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:41 compute-0 nova_compute[194781]: 2025-10-02 19:52:41.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:42 compute-0 nova_compute[194781]: 2025-10-02 19:52:42.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:42 compute-0 nova_compute[194781]: 2025-10-02 19:52:42.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:42 compute-0 nova_compute[194781]: 2025-10-02 19:52:42.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.066 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.069 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.173 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.283 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.285 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.385 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.393 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.491 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.492 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.571 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.572 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.671 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.672 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 podman[264712]: 2025-10-02 19:52:45.704565005 +0000 UTC m=+0.076464029 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.759 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.767 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.844 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.846 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:52:45 compute-0 nova_compute[194781]: 2025-10-02 19:52:45.923 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.406 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.407 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4724MB free_disk=72.34423828125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.408 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.408 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.524 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.524 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.524 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.524 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.525 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.626 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.647 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.649 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:52:46 compute-0 nova_compute[194781]: 2025-10-02 19:52:46.650 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:52:47.501 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:52:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:52:47.502 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:52:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:52:47.504 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:52:48 compute-0 nova_compute[194781]: 2025-10-02 19:52:48.647 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:48 compute-0 nova_compute[194781]: 2025-10-02 19:52:48.648 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:49 compute-0 nova_compute[194781]: 2025-10-02 19:52:49.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:51 compute-0 nova_compute[194781]: 2025-10-02 19:52:51.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:51 compute-0 nova_compute[194781]: 2025-10-02 19:52:51.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:52 compute-0 nova_compute[194781]: 2025-10-02 19:52:52.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:54 compute-0 nova_compute[194781]: 2025-10-02 19:52:54.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:54 compute-0 nova_compute[194781]: 2025-10-02 19:52:54.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:52:54 compute-0 nova_compute[194781]: 2025-10-02 19:52:54.477 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:52:54 compute-0 nova_compute[194781]: 2025-10-02 19:52:54.477 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:52:54 compute-0 nova_compute[194781]: 2025-10-02 19:52:54.478 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:52:55 compute-0 nova_compute[194781]: 2025-10-02 19:52:55.660 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updating instance_info_cache with network_info: [{"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:52:55 compute-0 nova_compute[194781]: 2025-10-02 19:52:55.676 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-ead9703a-68cd-4f65-a0dd-296c0a357b90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:52:55 compute-0 nova_compute[194781]: 2025-10-02 19:52:55.677 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:52:56 compute-0 nova_compute[194781]: 2025-10-02 19:52:56.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:56 compute-0 nova_compute[194781]: 2025-10-02 19:52:56.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:52:56 compute-0 podman[264749]: 2025-10-02 19:52:56.752274142 +0000 UTC m=+0.111001473 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.vendor=CentOS)
Oct 02 19:52:56 compute-0 podman[264748]: 2025-10-02 19:52:56.799302679 +0000 UTC m=+0.161311725 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:52:58 compute-0 podman[264790]: 2025-10-02 19:52:58.758993111 +0000 UTC m=+0.117381278 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 19:52:58 compute-0 podman[264788]: 2025-10-02 19:52:58.769226556 +0000 UTC m=+0.132225992 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:52:58 compute-0 podman[264789]: 2025-10-02 19:52:58.791724688 +0000 UTC m=+0.143853433 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, architecture=x86_64, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, 
managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, container_name=kepler)
Oct 02 19:52:59 compute-0 nova_compute[194781]: 2025-10-02 19:52:59.673 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:52:59 compute-0 podman[209015]: time="2025-10-02T19:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:52:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:52:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5690 "" "Go-http-client/1.1"
Oct 02 19:53:01 compute-0 nova_compute[194781]: 2025-10-02 19:53:01.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:01 compute-0 nova_compute[194781]: 2025-10-02 19:53:01.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:01 compute-0 openstack_network_exporter[211160]: ERROR   19:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:53:01 compute-0 openstack_network_exporter[211160]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:01 compute-0 openstack_network_exporter[211160]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:01 compute-0 openstack_network_exporter[211160]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:53:01 compute-0 openstack_network_exporter[211160]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:53:06 compute-0 nova_compute[194781]: 2025-10-02 19:53:06.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:06 compute-0 nova_compute[194781]: 2025-10-02 19:53:06.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:07 compute-0 podman[264844]: 2025-10-02 19:53:07.749139042 +0000 UTC m=+0.111119735 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:53:07 compute-0 podman[264845]: 2025-10-02 19:53:07.793074229 +0000 UTC m=+0.153321877 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 19:53:10 compute-0 podman[264885]: 2025-10-02 19:53:10.774670244 +0000 UTC m=+0.125915929 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 19:53:10 compute-0 podman[264886]: 2025-10-02 19:53:10.835583109 +0000 UTC m=+0.177603776 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:53:11 compute-0 nova_compute[194781]: 2025-10-02 19:53:11.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:11 compute-0 nova_compute[194781]: 2025-10-02 19:53:11.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.949 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.950 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.956 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.960 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.964 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'name': 'te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.965 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.965 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.965 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:53:12.965519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:12.991 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 334340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.018 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 61060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.043 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/cpu volume: 331240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.044 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/memory.usage volume: 46.5234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.045 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.045 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:53:13.044247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:53:13.045594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.049 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.053 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.056 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.057 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 2450 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.058 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.058 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.058 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:53:13.057775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.060 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.061 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.061 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:53:13.059281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:53:13.060669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:53:13.062147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:53:13.063912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.097 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.097 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.162 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.163 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.163 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.197 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 30685696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.198 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.199 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:53:13.199371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.200 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.200 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.201 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.202 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:53:13.201286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:53:13.203362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.219 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.220 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.246 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.246 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.247 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.261 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.262 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 1101066582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.263 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 115063820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.264 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.264 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.264 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.264 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 943706412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.265 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 153343232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.265 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:53:13.263450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:53:13.266379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.266 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.267 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.267 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.267 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 1108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.267 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.268 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.269 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:53:13.268761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.269 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.270 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.271 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.271 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:53:13.270320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.271 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.271 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:53:13.272698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.273 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.273 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.273 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.273 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.274 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.274 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.275 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.275 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 5300249955 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.275 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.275 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.275 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.276 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:53:13.275069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.276 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 3413901825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.276 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.277 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:53:13.277818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.278 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.278 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.278 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.278 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.280 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.280 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.280 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.280 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:53:13.280047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.280 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.281 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.281 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.281 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:53:13.282492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.282 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.283 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.284 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.284 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:53:13.283845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.285 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:53:13.285047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:53:13.286230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.286 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:53:13.287663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.287 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.288 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.288 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:53:13.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:53:16 compute-0 nova_compute[194781]: 2025-10-02 19:53:16.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:16 compute-0 nova_compute[194781]: 2025-10-02 19:53:16.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:16 compute-0 podman[264931]: 2025-10-02 19:53:16.719743392 +0000 UTC m=+0.077017183 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:53:21 compute-0 nova_compute[194781]: 2025-10-02 19:53:21.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:21 compute-0 nova_compute[194781]: 2025-10-02 19:53:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:26 compute-0 nova_compute[194781]: 2025-10-02 19:53:26.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:26 compute-0 nova_compute[194781]: 2025-10-02 19:53:26.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:27 compute-0 podman[264956]: 2025-10-02 19:53:27.746485639 +0000 UTC m=+0.098700005 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct 02 19:53:27 compute-0 podman[264957]: 2025-10-02 19:53:27.767204605 +0000 UTC m=+0.114081653 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:53:29 compute-0 podman[264995]: 2025-10-02 19:53:29.742080931 +0000 UTC m=+0.090754700 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_id=edpm, io.openshift.tags=base rhel9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=)
Oct 02 19:53:29 compute-0 podman[209015]: time="2025-10-02T19:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:53:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:53:29 compute-0 podman[264994]: 2025-10-02 19:53:29.7637103 +0000 UTC m=+0.114647747 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 02 19:53:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5696 "" "Go-http-client/1.1"
Oct 02 19:53:29 compute-0 podman[264996]: 2025-10-02 19:53:29.797096374 +0000 UTC m=+0.133061304 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:53:31 compute-0 nova_compute[194781]: 2025-10-02 19:53:31.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:31 compute-0 nova_compute[194781]: 2025-10-02 19:53:31.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: ERROR   19:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:53:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:53:36 compute-0 nova_compute[194781]: 2025-10-02 19:53:36.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:36 compute-0 nova_compute[194781]: 2025-10-02 19:53:36.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:38 compute-0 podman[265048]: 2025-10-02 19:53:38.753603506 +0000 UTC m=+0.113277322 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:53:38 compute-0 podman[265049]: 2025-10-02 19:53:38.786546108 +0000 UTC m=+0.135027114 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 19:53:41 compute-0 nova_compute[194781]: 2025-10-02 19:53:41.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:41 compute-0 nova_compute[194781]: 2025-10-02 19:53:41.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:41 compute-0 nova_compute[194781]: 2025-10-02 19:53:41.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:41 compute-0 podman[265090]: 2025-10-02 19:53:41.804666356 +0000 UTC m=+0.154580071 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:53:41 compute-0 podman[265091]: 2025-10-02 19:53:41.847847823 +0000 UTC m=+0.190904210 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:53:42 compute-0 nova_compute[194781]: 2025-10-02 19:53:42.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:42 compute-0 nova_compute[194781]: 2025-10-02 19:53:42.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:53:44 compute-0 nova_compute[194781]: 2025-10-02 19:53:44.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.069 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.070 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.070 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.070 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.172 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.235 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.235 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.311 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.323 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.396 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.398 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.495 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.497 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.561 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.562 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.671 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.682 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.779 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.781 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:53:46 compute-0 nova_compute[194781]: 2025-10-02 19:53:46.844 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.318 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.319 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4701MB free_disk=72.34428405761719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.319 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.320 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.391 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.391 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.392 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.392 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.392 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.474 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.491 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.494 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:53:47 compute-0 nova_compute[194781]: 2025-10-02 19:53:47.495 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:53:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:53:47.503 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:53:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:53:47.504 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:53:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:53:47.505 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:53:47 compute-0 podman[265156]: 2025-10-02 19:53:47.746781906 +0000 UTC m=+0.107410570 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:53:49 compute-0 nova_compute[194781]: 2025-10-02 19:53:49.490 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:49 compute-0 nova_compute[194781]: 2025-10-02 19:53:49.491 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:51 compute-0 nova_compute[194781]: 2025-10-02 19:53:51.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:51 compute-0 nova_compute[194781]: 2025-10-02 19:53:51.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:51 compute-0 nova_compute[194781]: 2025-10-02 19:53:51.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.551 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.552 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.552 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:53:54 compute-0 nova_compute[194781]: 2025-10-02 19:53:54.553 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:53:55 compute-0 nova_compute[194781]: 2025-10-02 19:53:55.830 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:53:55 compute-0 nova_compute[194781]: 2025-10-02 19:53:55.847 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:53:55 compute-0 nova_compute[194781]: 2025-10-02 19:53:55.848 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:53:55 compute-0 nova_compute[194781]: 2025-10-02 19:53:55.849 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:53:56 compute-0 nova_compute[194781]: 2025-10-02 19:53:56.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:56 compute-0 nova_compute[194781]: 2025-10-02 19:53:56.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:53:58 compute-0 podman[265181]: 2025-10-02 19:53:58.744154034 +0000 UTC m=+0.111305191 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:53:58 compute-0 podman[265182]: 2025-10-02 19:53:58.756521184 +0000 UTC m=+0.122615363 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:53:59 compute-0 podman[209015]: time="2025-10-02T19:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:53:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:53:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5695 "" "Go-http-client/1.1"
Oct 02 19:54:00 compute-0 podman[265218]: 2025-10-02 19:54:00.753865562 +0000 UTC m=+0.106111907 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git)
Oct 02 19:54:00 compute-0 podman[265219]: 2025-10-02 19:54:00.782629696 +0000 UTC m=+0.128220769 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=, distribution-scope=public, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 19:54:00 compute-0 podman[265220]: 2025-10-02 19:54:00.789325549 +0000 UTC m=+0.129348407 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:54:01 compute-0 nova_compute[194781]: 2025-10-02 19:54:01.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:01 compute-0 nova_compute[194781]: 2025-10-02 19:54:01.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: ERROR   19:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:54:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:54:06 compute-0 nova_compute[194781]: 2025-10-02 19:54:06.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:06 compute-0 nova_compute[194781]: 2025-10-02 19:54:06.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:09 compute-0 podman[265274]: 2025-10-02 19:54:09.722496908 +0000 UTC m=+0.082824264 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:54:09 compute-0 podman[265275]: 2025-10-02 19:54:09.723788321 +0000 UTC m=+0.085967165 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 19:54:11 compute-0 nova_compute[194781]: 2025-10-02 19:54:11.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:11 compute-0 nova_compute[194781]: 2025-10-02 19:54:11.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:12 compute-0 podman[265316]: 2025-10-02 19:54:12.727133026 +0000 UTC m=+0.101032465 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:54:12 compute-0 podman[265317]: 2025-10-02 19:54:12.81616699 +0000 UTC m=+0.167825953 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:54:16 compute-0 nova_compute[194781]: 2025-10-02 19:54:16.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:16 compute-0 nova_compute[194781]: 2025-10-02 19:54:16.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:18 compute-0 podman[265360]: 2025-10-02 19:54:18.699745846 +0000 UTC m=+0.066834450 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:54:21 compute-0 nova_compute[194781]: 2025-10-02 19:54:21.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:21 compute-0 nova_compute[194781]: 2025-10-02 19:54:21.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:26 compute-0 nova_compute[194781]: 2025-10-02 19:54:26.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:26 compute-0 nova_compute[194781]: 2025-10-02 19:54:26.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:29 compute-0 podman[265385]: 2025-10-02 19:54:29.714390427 +0000 UTC m=+0.080500704 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:54:29 compute-0 podman[265384]: 2025-10-02 19:54:29.733235104 +0000 UTC m=+0.101110387 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:54:29 compute-0 podman[209015]: time="2025-10-02T19:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:54:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:54:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5693 "" "Go-http-client/1.1"
Oct 02 19:54:31 compute-0 nova_compute[194781]: 2025-10-02 19:54:31.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:31 compute-0 nova_compute[194781]: 2025-10-02 19:54:31.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: ERROR   19:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:54:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:54:31 compute-0 podman[265423]: 2025-10-02 19:54:31.762937109 +0000 UTC m=+0.111642199 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 19:54:31 compute-0 podman[265421]: 2025-10-02 19:54:31.765877785 +0000 UTC m=+0.126037042 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 02 19:54:31 compute-0 podman[265422]: 2025-10-02 19:54:31.783777348 +0000 UTC m=+0.151007458 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct 02 19:54:36 compute-0 nova_compute[194781]: 2025-10-02 19:54:36.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:36 compute-0 nova_compute[194781]: 2025-10-02 19:54:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:40 compute-0 podman[265478]: 2025-10-02 19:54:40.742330934 +0000 UTC m=+0.096755904 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 19:54:40 compute-0 podman[265479]: 2025-10-02 19:54:40.783785147 +0000 UTC m=+0.136039931 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 19:54:41 compute-0 nova_compute[194781]: 2025-10-02 19:54:41.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:41 compute-0 nova_compute[194781]: 2025-10-02 19:54:41.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:42 compute-0 nova_compute[194781]: 2025-10-02 19:54:42.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:42 compute-0 nova_compute[194781]: 2025-10-02 19:54:42.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:54:43 compute-0 nova_compute[194781]: 2025-10-02 19:54:43.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:43 compute-0 podman[265518]: 2025-10-02 19:54:43.722954521 +0000 UTC m=+0.089093236 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:54:43 compute-0 podman[265519]: 2025-10-02 19:54:43.77392355 +0000 UTC m=+0.128163377 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct 02 19:54:46 compute-0 nova_compute[194781]: 2025-10-02 19:54:46.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:46 compute-0 nova_compute[194781]: 2025-10-02 19:54:46.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:46 compute-0 nova_compute[194781]: 2025-10-02 19:54:46.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:54:47.504 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:54:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:54:47.506 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:54:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:54:47.508 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.061 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.062 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.062 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.063 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.167 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.248 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.249 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.310 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.322 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.383 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.385 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.444 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.445 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.512 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.513 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.585 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.597 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.695 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.697 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:54:48 compute-0 nova_compute[194781]: 2025-10-02 19:54:48.792 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.219 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.221 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4679MB free_disk=72.34433364868164GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.221 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.222 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.368 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.368 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.368 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.368 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.369 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.448 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.463 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.464 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:54:49 compute-0 nova_compute[194781]: 2025-10-02 19:54:49.465 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:54:49 compute-0 podman[265586]: 2025-10-02 19:54:49.732343901 +0000 UTC m=+0.104329370 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:54:51 compute-0 nova_compute[194781]: 2025-10-02 19:54:51.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:51 compute-0 nova_compute[194781]: 2025-10-02 19:54:51.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:51 compute-0 nova_compute[194781]: 2025-10-02 19:54:51.460 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:51 compute-0 nova_compute[194781]: 2025-10-02 19:54:51.460 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:51 compute-0 nova_compute[194781]: 2025-10-02 19:54:51.460 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:55 compute-0 nova_compute[194781]: 2025-10-02 19:54:55.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.566 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.567 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:54:56 compute-0 nova_compute[194781]: 2025-10-02 19:54:56.567 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:54:59 compute-0 podman[209015]: time="2025-10-02T19:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:54:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:54:59 compute-0 nova_compute[194781]: 2025-10-02 19:54:59.763 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [{"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:54:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5700 "" "Go-http-client/1.1"
Oct 02 19:54:59 compute-0 nova_compute[194781]: 2025-10-02 19:54:59.795 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:54:59 compute-0 nova_compute[194781]: 2025-10-02 19:54:59.796 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:55:00 compute-0 podman[265609]: 2025-10-02 19:55:00.779563716 +0000 UTC m=+0.130079746 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 19:55:00 compute-0 podman[265608]: 2025-10-02 19:55:00.804102281 +0000 UTC m=+0.167997688 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 19:55:01 compute-0 nova_compute[194781]: 2025-10-02 19:55:01.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:01 compute-0 nova_compute[194781]: 2025-10-02 19:55:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:01 compute-0 openstack_network_exporter[211160]: ERROR   19:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:55:01 compute-0 openstack_network_exporter[211160]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:01 compute-0 openstack_network_exporter[211160]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:01 compute-0 openstack_network_exporter[211160]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:55:01 compute-0 openstack_network_exporter[211160]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:55:02 compute-0 podman[265647]: 2025-10-02 19:55:02.747305267 +0000 UTC m=+0.109975657 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, config_id=edpm, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct 02 19:55:02 compute-0 podman[265648]: 2025-10-02 19:55:02.751935116 +0000 UTC m=+0.099897085 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 19:55:02 compute-0 podman[265646]: 2025-10-02 19:55:02.756230168 +0000 UTC m=+0.115625233 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vcs-type=git, architecture=x86_64)
Oct 02 19:55:02 compute-0 nova_compute[194781]: 2025-10-02 19:55:02.791 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:06 compute-0 nova_compute[194781]: 2025-10-02 19:55:06.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:06 compute-0 nova_compute[194781]: 2025-10-02 19:55:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:11 compute-0 nova_compute[194781]: 2025-10-02 19:55:11.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:11 compute-0 nova_compute[194781]: 2025-10-02 19:55:11.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:11 compute-0 podman[265704]: 2025-10-02 19:55:11.784078597 +0000 UTC m=+0.145783163 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:55:11 compute-0 podman[265705]: 2025-10-02 19:55:11.784295312 +0000 UTC m=+0.134753137 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.950 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.950 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.965 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'name': 'te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.970 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.975 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'name': 'te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta', 'flavor': {'id': '7ab5ea96-81dd-4496-8a1f-012f7d2c53c5', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b43dc593-d176-449d-a8d5-95d53b8e1b5e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3dae65399d7c47999282bff6664f6d16', 'user_id': '23b5415980f24bbbbfa331c702f6f7d9', 'hostId': '298cf1af4dee135a9d0b3050937724c6c926b466f9f6516cf98c662a', 'status': 'active', 'metadata': {'metering.server_group': 'd4713e41-6620-49a4-8665-1b2fbe664d9c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.976 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.977 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:12.978 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:55:12.977137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.025 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/cpu volume: 336140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.063 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 62900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.092 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/cpu volume: 333060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.093 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.094 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/memory.usage volume: 42.52734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.094 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.094 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/memory.usage volume: 46.5234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:55:13.093861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:55:13.095693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.101 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.107 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.112 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.114 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes volume: 2450 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.114 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.115 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.116 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.116 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.116 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.117 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:55:13.113970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.117 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:55:13.116753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.119 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.119 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.120 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.120 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.122 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.123 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:55:13.119587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.123 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.123 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:55:13.122679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.125 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.125 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:55:13.125598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.176 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.177 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.248 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.249 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.250 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.302 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 30685696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.303 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.304 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.305 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.305 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.306 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.307 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.307 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:55:13.304712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:55:13.306425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:55:13.308134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.332 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.333 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.378 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.379 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.379 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.406 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.407 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.410 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 1101066582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:55:13.410165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.411 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.latency volume: 115063820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.411 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.412 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.413 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.413 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 943706412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.414 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.latency volume: 153343232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.415 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.417 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:55:13.416513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.417 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.418 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.418 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.419 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.419 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 1108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.420 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.423 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:55:13.422830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.424 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.424 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.426 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.426 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.427 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.428 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:55:13.426690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.429 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.429 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.430 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.431 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.433 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.434 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.434 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.435 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:55:13.433155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.436 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.436 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 73179136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.437 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.438 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.439 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 5300249955 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:55:13.439548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.440 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.441 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.441 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.442 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.442 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 3413901825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.443 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.444 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.445 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.445 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.446 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.447 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.447 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.448 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.448 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.449 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:55:13.445604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.451 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.452 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.452 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.452 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.453 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:55:13.451989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.454 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.454 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.454 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.455 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.456 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.457 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.457 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.457 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.458 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.458 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.459 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.460 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:55:13.457503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.460 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:55:13.460761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.462 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.465 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:55:13.462619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.465 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.466 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.466 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.466 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.467 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.468 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.468 14 DEBUG ceilometer.compute.pollsters [-] f0ac40ea-f3c9-4981-ba99-bfbf34bd253a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.468 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:55:13.465822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:55:13.468023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.469 14 DEBUG ceilometer.compute.pollsters [-] ead9703a-68cd-4f65-a0dd-296c0a357b90/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:55:13.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:55:14 compute-0 podman[265747]: 2025-10-02 19:55:14.783306157 +0000 UTC m=+0.150156526 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:55:14 compute-0 podman[265748]: 2025-10-02 19:55:14.880106501 +0000 UTC m=+0.233272186 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 19:55:16 compute-0 nova_compute[194781]: 2025-10-02 19:55:16.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:16 compute-0 nova_compute[194781]: 2025-10-02 19:55:16.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:20 compute-0 podman[265792]: 2025-10-02 19:55:20.761635374 +0000 UTC m=+0.107184584 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:55:21 compute-0 nova_compute[194781]: 2025-10-02 19:55:21.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:21 compute-0 nova_compute[194781]: 2025-10-02 19:55:21.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:26 compute-0 nova_compute[194781]: 2025-10-02 19:55:26.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:26 compute-0 nova_compute[194781]: 2025-10-02 19:55:26.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:29 compute-0 podman[209015]: time="2025-10-02T19:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:55:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33214 "" "Go-http-client/1.1"
Oct 02 19:55:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5696 "" "Go-http-client/1.1"
Oct 02 19:55:31 compute-0 nova_compute[194781]: 2025-10-02 19:55:31.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: ERROR   19:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:55:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:55:31 compute-0 nova_compute[194781]: 2025-10-02 19:55:31.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:31 compute-0 podman[265815]: 2025-10-02 19:55:31.784785276 +0000 UTC m=+0.135232070 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct 02 19:55:31 compute-0 podman[265814]: 2025-10-02 19:55:31.788925493 +0000 UTC m=+0.144350616 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001)
Oct 02 19:55:33 compute-0 podman[265850]: 2025-10-02 19:55:33.760542165 +0000 UTC m=+0.111192248 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:55:33 compute-0 podman[265849]: 2025-10-02 19:55:33.765548145 +0000 UTC m=+0.120962401 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Oct 02 19:55:33 compute-0 podman[265851]: 2025-10-02 19:55:33.820724632 +0000 UTC m=+0.163794499 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 19:55:36 compute-0 nova_compute[194781]: 2025-10-02 19:55:36.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:36 compute-0 nova_compute[194781]: 2025-10-02 19:55:36.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.851 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.852 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.852 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.852 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.853 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.854 2 INFO nova.compute.manager [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Terminating instance
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.855 2 DEBUG nova.compute.manager [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:55:37 compute-0 kernel: tap45b53db0-b1 (unregistering): left promiscuous mode
Oct 02 19:55:37 compute-0 NetworkManager[52324]: <info>  [1759434937.9214] device (tap45b53db0-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:37 compute-0 ovn_controller[97052]: 2025-10-02T19:55:37Z|00193|binding|INFO|Releasing lport 45b53db0-b1f5-401e-8a98-c127ada04a9c from this chassis (sb_readonly=0)
Oct 02 19:55:37 compute-0 ovn_controller[97052]: 2025-10-02T19:55:37Z|00194|binding|INFO|Setting lport 45b53db0-b1f5-401e-8a98-c127ada04a9c down in Southbound
Oct 02 19:55:37 compute-0 ovn_controller[97052]: 2025-10-02T19:55:37Z|00195|binding|INFO|Removing iface tap45b53db0-b1 ovn-installed in OVS
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:37 compute-0 nova_compute[194781]: 2025-10-02 19:55:37.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:37 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:37.986 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:c6:bd 10.100.2.28'], port_security=['fa:16:3e:e2:c6:bd 10.100.2.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.28/16', 'neutron:device_id': 'f0ac40ea-f3c9-4981-ba99-bfbf34bd253a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dae65399d7c47999282bff6664f6d16', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb16109a-6359-4dd8-bfae-0a7015239961', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31c9bff4-971d-41c4-a82c-3f2067f94d21, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=45b53db0-b1f5-401e-8a98-c127ada04a9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:37 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:37.989 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 45b53db0-b1f5-401e-8a98-c127ada04a9c in datapath b8407621-6f3e-4864-b018-8cf0d0e8428e unbound from our chassis
Oct 02 19:55:37 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:37.993 105943 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8407621-6f3e-4864-b018-8cf0d0e8428e
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.020 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[28492f20-4034-46e5-ae1a-5b5c11ff8b1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:38 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 02 19:55:38 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 7min 6.902s CPU time.
Oct 02 19:55:38 compute-0 systemd-machined[154795]: Machine qemu-11-instance-0000000b terminated.
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.060 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[66126f58-4397-4aec-bbe3-cdceeccbf426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.067 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[9514a8c6-54ca-4c86-9661-744a4070248d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.108 246930 DEBUG oslo.privsep.daemon [-] privsep: reply[4803289f-6fbd-4f84-ae2b-ff3f694a6fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.128 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[5888b1a2-554b-44ce-9fd5-1903d82a074e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8407621-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:a6:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 2260, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 2260, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535296, 'reachable_time': 23010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265935, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.132 2 INFO nova.virt.libvirt.driver [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Instance destroyed successfully.
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.133 2 DEBUG nova.objects.instance [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lazy-loading 'resources' on Instance uuid f0ac40ea-f3c9-4981-ba99-bfbf34bd253a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.151 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c50e12-292c-4962-bdd2-ca34c65762ff]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapb8407621-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535314, 'tstamp': 535314}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265937, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8407621-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535319, 'tstamp': 535319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265937, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.152 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8407621-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.160 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8407621-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.160 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.160 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8407621-60, col_values=(('external_ids', {'iface-id': 'aaa6ea3c-0164-44d4-b435-0c6c04e73e3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.161 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.198 2 DEBUG nova.virt.libvirt.vif [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-2850012-asg-2udtsluunakm-raep7ui33wxe-fuhkpbvwtudg',id=11,image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:43:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d4713e41-6620-49a4-8665-1b2fbe664d9c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3dae65399d7c47999282bff6664f6d16',ramdisk_id='',reservation_id='r-35d7ip07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-732152950',owner_user_name='tempest-PrometheusGabbiTest-732152950-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:43:21Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='23b5415980f24bbbbfa331c702f6f7d9',uuid=f0ac40ea-f3c9-4981-ba99-bfbf34bd253a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.199 2 DEBUG nova.network.os_vif_util [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converting VIF {"id": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "address": "fa:16:3e:e2:c6:bd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45b53db0-b1", "ovs_interfaceid": "45b53db0-b1f5-401e-8a98-c127ada04a9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.200 2 DEBUG nova.network.os_vif_util [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.201 2 DEBUG os_vif [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.205 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45b53db0-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.215 2 INFO os_vif [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:c6:bd,bridge_name='br-int',has_traffic_filtering=True,id=45b53db0-b1f5-401e-8a98-c127ada04a9c,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45b53db0-b1')
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.216 2 INFO nova.virt.libvirt.driver [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Deleting instance files /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a_del
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.217 2 INFO nova.virt.libvirt.driver [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Deletion of /var/lib/nova/instances/f0ac40ea-f3c9-4981-ba99-bfbf34bd253a_del complete
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.363 2 INFO nova.compute.manager [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Took 0.51 seconds to destroy the instance on the hypervisor.
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.364 2 DEBUG oslo.service.loopingcall [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.365 2 DEBUG nova.compute.manager [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.365 2 DEBUG nova.network.neutron [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.572 2 DEBUG nova.compute.manager [req-70af6aad-a4bf-48f1-89d2-8f50470c8f1c req-2f1848bb-3003-43b0-b115-2000382b78d6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-vif-unplugged-45b53db0-b1f5-401e-8a98-c127ada04a9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.573 2 DEBUG oslo_concurrency.lockutils [req-70af6aad-a4bf-48f1-89d2-8f50470c8f1c req-2f1848bb-3003-43b0-b115-2000382b78d6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.573 2 DEBUG oslo_concurrency.lockutils [req-70af6aad-a4bf-48f1-89d2-8f50470c8f1c req-2f1848bb-3003-43b0-b115-2000382b78d6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.573 2 DEBUG oslo_concurrency.lockutils [req-70af6aad-a4bf-48f1-89d2-8f50470c8f1c req-2f1848bb-3003-43b0-b115-2000382b78d6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.574 2 DEBUG nova.compute.manager [req-70af6aad-a4bf-48f1-89d2-8f50470c8f1c req-2f1848bb-3003-43b0-b115-2000382b78d6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] No waiting events found dispatching network-vif-unplugged-45b53db0-b1f5-401e-8a98-c127ada04a9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.574 2 DEBUG nova.compute.manager [req-70af6aad-a4bf-48f1-89d2-8f50470c8f1c req-2f1848bb-3003-43b0-b115-2000382b78d6 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-vif-unplugged-45b53db0-b1f5-401e-8a98-c127ada04a9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.592 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:38 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:38.593 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 19:55:38 compute-0 nova_compute[194781]: 2025-10-02 19:55:38.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.677 2 DEBUG nova.compute.manager [req-c6df4645-0c54-41b7-b072-240d92c25c22 req-4ceaadfa-9906-4481-909b-059d44ebbc8c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.678 2 DEBUG oslo_concurrency.lockutils [req-c6df4645-0c54-41b7-b072-240d92c25c22 req-4ceaadfa-9906-4481-909b-059d44ebbc8c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.678 2 DEBUG oslo_concurrency.lockutils [req-c6df4645-0c54-41b7-b072-240d92c25c22 req-4ceaadfa-9906-4481-909b-059d44ebbc8c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.678 2 DEBUG oslo_concurrency.lockutils [req-c6df4645-0c54-41b7-b072-240d92c25c22 req-4ceaadfa-9906-4481-909b-059d44ebbc8c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.679 2 DEBUG nova.compute.manager [req-c6df4645-0c54-41b7-b072-240d92c25c22 req-4ceaadfa-9906-4481-909b-059d44ebbc8c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] No waiting events found dispatching network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.679 2 WARNING nova.compute.manager [req-c6df4645-0c54-41b7-b072-240d92c25c22 req-4ceaadfa-9906-4481-909b-059d44ebbc8c fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received unexpected event network-vif-plugged-45b53db0-b1f5-401e-8a98-c127ada04a9c for instance with vm_state active and task_state deleting.
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.832 2 DEBUG nova.network.neutron [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.851 2 INFO nova.compute.manager [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Took 2.49 seconds to deallocate network for instance.
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.901 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.902 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:40 compute-0 nova_compute[194781]: 2025-10-02 19:55:40.938 2 DEBUG nova.compute.manager [req-9410117f-1121-444d-8953-658d01d1c263 req-ebc47f38-d456-48c4-8540-671f44078002 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Received event network-vif-deleted-45b53db0-b1f5-401e-8a98-c127ada04a9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:41 compute-0 nova_compute[194781]: 2025-10-02 19:55:41.030 2 DEBUG nova.compute.provider_tree [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:55:41 compute-0 nova_compute[194781]: 2025-10-02 19:55:41.050 2 DEBUG nova.scheduler.client.report [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:55:41 compute-0 nova_compute[194781]: 2025-10-02 19:55:41.080 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:41 compute-0 nova_compute[194781]: 2025-10-02 19:55:41.112 2 INFO nova.scheduler.client.report [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Deleted allocations for instance f0ac40ea-f3c9-4981-ba99-bfbf34bd253a
Oct 02 19:55:41 compute-0 nova_compute[194781]: 2025-10-02 19:55:41.205 2 DEBUG oslo_concurrency.lockutils [None req-2eeb773b-0d92-4c4b-8530-066312fa1260 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "f0ac40ea-f3c9-4981-ba99-bfbf34bd253a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:41 compute-0 nova_compute[194781]: 2025-10-02 19:55:41.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:42 compute-0 nova_compute[194781]: 2025-10-02 19:55:42.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:42 compute-0 nova_compute[194781]: 2025-10-02 19:55:42.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:55:42 compute-0 podman[265940]: 2025-10-02 19:55:42.753612952 +0000 UTC m=+0.110295305 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:55:42 compute-0 podman[265941]: 2025-10-02 19:55:42.777326685 +0000 UTC m=+0.130422815 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:55:43 compute-0 nova_compute[194781]: 2025-10-02 19:55:43.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:44 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:44.596 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:45 compute-0 nova_compute[194781]: 2025-10-02 19:55:45.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:45 compute-0 podman[265979]: 2025-10-02 19:55:45.750749326 +0000 UTC m=+0.111633519 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 19:55:45 compute-0 podman[265980]: 2025-10-02 19:55:45.825513101 +0000 UTC m=+0.189405992 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:55:46 compute-0 nova_compute[194781]: 2025-10-02 19:55:46.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:47.506 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:47.506 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:47.507 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.072 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.073 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.074 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.238 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.322 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.325 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.405 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.407 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.498 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.500 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.609 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.621 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.704 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.705 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.804 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.957 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.958 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.958 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.959 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.959 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.961 2 INFO nova.compute.manager [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Terminating instance
Oct 02 19:55:48 compute-0 nova_compute[194781]: 2025-10-02 19:55:48.963 2 DEBUG nova.compute.manager [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 19:55:49 compute-0 kernel: tap722eab1f-2c (unregistering): left promiscuous mode
Oct 02 19:55:49 compute-0 NetworkManager[52324]: <info>  [1759434949.0238] device (tap722eab1f-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00196|binding|INFO|Releasing lport 722eab1f-2c73-4b59-9732-99ee52407450 from this chassis (sb_readonly=0)
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00197|binding|INFO|Setting lport 722eab1f-2c73-4b59-9732-99ee52407450 down in Southbound
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00198|binding|INFO|Removing iface tap722eab1f-2c ovn-installed in OVS
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.048 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:57:cd 10.100.0.62'], port_security=['fa:16:3e:c7:57:cd 10.100.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.62/16', 'neutron:device_id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dae65399d7c47999282bff6664f6d16', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb16109a-6359-4dd8-bfae-0a7015239961', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31c9bff4-971d-41c4-a82c-3f2067f94d21, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=722eab1f-2c73-4b59-9732-99ee52407450) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.049 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 722eab1f-2c73-4b59-9732-99ee52407450 in datapath b8407621-6f3e-4864-b018-8cf0d0e8428e unbound from our chassis
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.051 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8407621-6f3e-4864-b018-8cf0d0e8428e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.052 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[c41d3550-594f-4386-b118-286711c35dc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.053 105943 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e namespace which is not needed anymore
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 02 19:55:49 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 6min 47.968s CPU time.
Oct 02 19:55:49 compute-0 systemd-machined[154795]: Machine qemu-15-instance-0000000e terminated.
Oct 02 19:55:49 compute-0 kernel: tap722eab1f-2c: entered promiscuous mode
Oct 02 19:55:49 compute-0 kernel: tap722eab1f-2c (unregistering): left promiscuous mode
Oct 02 19:55:49 compute-0 NetworkManager[52324]: <info>  [1759434949.2119] manager: (tap722eab1f-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00199|binding|INFO|Claiming lport 722eab1f-2c73-4b59-9732-99ee52407450 for this chassis.
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00200|binding|INFO|722eab1f-2c73-4b59-9732-99ee52407450: Claiming fa:16:3e:c7:57:cd 10.100.0.62
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.239 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:57:cd 10.100.0.62'], port_security=['fa:16:3e:c7:57:cd 10.100.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.62/16', 'neutron:device_id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dae65399d7c47999282bff6664f6d16', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb16109a-6359-4dd8-bfae-0a7015239961', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31c9bff4-971d-41c4-a82c-3f2067f94d21, chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=722eab1f-2c73-4b59-9732-99ee52407450) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.256 2 DEBUG nova.compute.manager [req-d9b1b7fd-40d5-458b-b144-8bccef46d22b req-cd7994c1-5299-49d5-9d47-d2cbed716504 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-unplugged-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.257 2 DEBUG oslo_concurrency.lockutils [req-d9b1b7fd-40d5-458b-b144-8bccef46d22b req-cd7994c1-5299-49d5-9d47-d2cbed716504 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.257 2 DEBUG oslo_concurrency.lockutils [req-d9b1b7fd-40d5-458b-b144-8bccef46d22b req-cd7994c1-5299-49d5-9d47-d2cbed716504 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.257 2 DEBUG oslo_concurrency.lockutils [req-d9b1b7fd-40d5-458b-b144-8bccef46d22b req-cd7994c1-5299-49d5-9d47-d2cbed716504 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.257 2 DEBUG nova.compute.manager [req-d9b1b7fd-40d5-458b-b144-8bccef46d22b req-cd7994c1-5299-49d5-9d47-d2cbed716504 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] No waiting events found dispatching network-vif-unplugged-722eab1f-2c73-4b59-9732-99ee52407450 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.257 2 DEBUG nova.compute.manager [req-d9b1b7fd-40d5-458b-b144-8bccef46d22b req-cd7994c1-5299-49d5-9d47-d2cbed716504 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-unplugged-722eab1f-2c73-4b59-9732-99ee52407450 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00201|binding|INFO|Setting lport 722eab1f-2c73-4b59-9732-99ee52407450 ovn-installed in OVS
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00202|binding|INFO|Setting lport 722eab1f-2c73-4b59-9732-99ee52407450 up in Southbound
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00203|binding|INFO|Releasing lport 722eab1f-2c73-4b59-9732-99ee52407450 from this chassis (sb_readonly=1)
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00204|if_status|INFO|Not setting lport 722eab1f-2c73-4b59-9732-99ee52407450 down as sb is readonly
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00205|binding|INFO|Removing iface tap722eab1f-2c ovn-installed in OVS
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00206|binding|INFO|Releasing lport 722eab1f-2c73-4b59-9732-99ee52407450 from this chassis (sb_readonly=1)
Oct 02 19:55:49 compute-0 ovn_controller[97052]: 2025-10-02T19:55:49Z|00207|binding|INFO|Setting lport 722eab1f-2c73-4b59-9732-99ee52407450 down in Southbound
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.271 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:57:cd 10.100.0.62'], port_security=['fa:16:3e:c7:57:cd 10.100.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.62/16', 'neutron:device_id': 'ead9703a-68cd-4f65-a0dd-296c0a357b90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3dae65399d7c47999282bff6664f6d16', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb16109a-6359-4dd8-bfae-0a7015239961', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31c9bff4-971d-41c4-a82c-3f2067f94d21, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>], logical_port=722eab1f-2c73-4b59-9732-99ee52407450) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0d77c2700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [NOTICE]   (259629) : haproxy version is 2.8.14-c23fe91
Oct 02 19:55:49 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [NOTICE]   (259629) : path to executable is /usr/sbin/haproxy
Oct 02 19:55:49 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [WARNING]  (259629) : Exiting Master process...
Oct 02 19:55:49 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [ALERT]    (259629) : Current worker (259631) exited with code 143 (Terminated)
Oct 02 19:55:49 compute-0 neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e[259625]: [WARNING]  (259629) : All workers exited. Exiting... (0)
Oct 02 19:55:49 compute-0 systemd[1]: libpod-598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10.scope: Deactivated successfully.
Oct 02 19:55:49 compute-0 podman[266066]: 2025-10-02 19:55:49.292758999 +0000 UTC m=+0.108523149 container died 598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.292 2 INFO nova.virt.libvirt.driver [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Instance destroyed successfully.
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.293 2 DEBUG nova.objects.instance [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lazy-loading 'resources' on Instance uuid ead9703a-68cd-4f65-a0dd-296c0a357b90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.313 2 DEBUG nova.virt.libvirt.vif [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T19:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-2850012-asg-2udtsluunakm-vpzg74q4wzzd-qo6zqhhuiuta',id=14,image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T19:45:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='d4713e41-6620-49a4-8665-1b2fbe664d9c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3dae65399d7c47999282bff6664f6d16',ramdisk_id='',reservation_id='r-04x7pqzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b43dc593-d176-449d-a8d5-95d53b8e1b5e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-732152950',owner_user_name='tempest-PrometheusGabbiTest-732152950-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T19:45:51Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='23b5415980f24bbbbfa331c702f6f7d9',uuid=ead9703a-68cd-4f65-a0dd-296c0a357b90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.313 2 DEBUG nova.network.os_vif_util [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converting VIF {"id": "722eab1f-2c73-4b59-9732-99ee52407450", "address": "fa:16:3e:c7:57:cd", "network": {"id": "b8407621-6f3e-4864-b018-8cf0d0e8428e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3dae65399d7c47999282bff6664f6d16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap722eab1f-2c", "ovs_interfaceid": "722eab1f-2c73-4b59-9732-99ee52407450", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.314 2 DEBUG nova.network.os_vif_util [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.315 2 DEBUG os_vif [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap722eab1f-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.324 2 INFO os_vif [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:57:cd,bridge_name='br-int',has_traffic_filtering=True,id=722eab1f-2c73-4b59-9732-99ee52407450,network=Network(b8407621-6f3e-4864-b018-8cf0d0e8428e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap722eab1f-2c')
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.325 2 INFO nova.virt.libvirt.driver [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Deleting instance files /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90_del
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.326 2 INFO nova.virt.libvirt.driver [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Deletion of /var/lib/nova/instances/ead9703a-68cd-4f65-a0dd-296c0a357b90_del complete
Oct 02 19:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10-userdata-shm.mount: Deactivated successfully.
Oct 02 19:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1189b86e3b7293ab1cf09fb5a8365c543d53b9dc719a4e9a0da0c0154f63f1b-merged.mount: Deactivated successfully.
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.377 2 INFO nova.compute.manager [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Took 0.41 seconds to destroy the instance on the hypervisor.
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.378 2 DEBUG oslo.service.loopingcall [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.379 2 DEBUG nova.compute.manager [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.379 2 DEBUG nova.network.neutron [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 19:55:49 compute-0 podman[266066]: 2025-10-02 19:55:49.380087539 +0000 UTC m=+0.195851699 container cleanup 598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:55:49 compute-0 systemd[1]: libpod-conmon-598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10.scope: Deactivated successfully.
Oct 02 19:55:49 compute-0 podman[266110]: 2025-10-02 19:55:49.508140342 +0000 UTC m=+0.080439622 container remove 598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.508 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.510 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4865MB free_disk=72.37329483032227GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.510 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.510 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.517 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6b41522b-ae51-4877-8e0f-ee058848d588]: (4, ('Thu Oct  2 07:55:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e (598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10)\n598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10\nThu Oct  2 07:55:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e (598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10)\n598f63771c34ef8b26126b689a6108478c0c7bea94650a424cf64cadf13ebd10\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.521 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[80e90a9a-4511-42c5-9ad7-551afcaba97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.522 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8407621-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 kernel: tapb8407621-60: left promiscuous mode
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.546 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc8584a-8a26-4a07-87fd-f4975a7c4f51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.576 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbd5fa6-cb70-4722-b783-315026b4597c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.577 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9b1d79-0f82-404e-af95-3fcc95573bb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.595 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9a3ff7-9cd3-40ac-bbe0-38be16530d24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535286, 'reachable_time': 37323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266126, 'error': None, 'target': 'ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.598 106060 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8407621-6f3e-4864-b018-8cf0d0e8428e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.599 106060 DEBUG oslo.privsep.daemon [-] privsep: reply[f98ca3cb-ddbb-40fa-a14b-f5d8ec707afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.600 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 722eab1f-2c73-4b59-9732-99ee52407450 in datapath b8407621-6f3e-4864-b018-8cf0d0e8428e unbound from our chassis
Oct 02 19:55:49 compute-0 systemd[1]: run-netns-ovnmeta\x2db8407621\x2d6f3e\x2d4864\x2db018\x2d8cf0d0e8428e.mount: Deactivated successfully.
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.603 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8407621-6f3e-4864-b018-8cf0d0e8428e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.604 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1e2e73-b64b-452e-b922-e14193ce8c20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.606 105943 INFO neutron.agent.ovn.metadata.agent [-] Port 722eab1f-2c73-4b59-9732-99ee52407450 in datapath b8407621-6f3e-4864-b018-8cf0d0e8428e unbound from our chassis
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.608 105943 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8407621-6f3e-4864-b018-8cf0d0e8428e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 19:55:49 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:55:49.609 246899 DEBUG oslo.privsep.daemon [-] privsep: reply[163cc035-8e4a-452e-b8aa-3be6b27c543f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.715 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.716 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance ead9703a-68cd-4f65-a0dd-296c0a357b90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.716 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.717 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.779 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.838 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.839 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.859 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.878 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.945 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:55:49 compute-0 nova_compute[194781]: 2025-10-02 19:55:49.979 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.009 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.010 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.083 2 DEBUG nova.network.neutron [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.104 2 INFO nova.compute.manager [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Took 0.72 seconds to deallocate network for instance.
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.155 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.156 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.177 2 DEBUG nova.compute.manager [req-f668113d-6bb3-4fe9-b105-585d133c6109 req-c001fc5f-c262-467f-8efb-0a23dd194921 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-deleted-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.244 2 DEBUG nova.compute.provider_tree [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.262 2 DEBUG nova.scheduler.client.report [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.288 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.324 2 INFO nova.scheduler.client.report [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Deleted allocations for instance ead9703a-68cd-4f65-a0dd-296c0a357b90
Oct 02 19:55:50 compute-0 nova_compute[194781]: 2025-10-02 19:55:50.386 2 DEBUG oslo_concurrency.lockutils [None req-cc233e13-e00d-4c29-bb87-0bf771655937 23b5415980f24bbbbfa331c702f6f7d9 3dae65399d7c47999282bff6664f6d16 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.361 2 DEBUG nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.362 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.363 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.363 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.364 2 DEBUG nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] No waiting events found dispatching network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.364 2 WARNING nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received unexpected event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 for instance with vm_state deleted and task_state None.
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.364 2 DEBUG nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.365 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.365 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.366 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.366 2 DEBUG nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] No waiting events found dispatching network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.367 2 WARNING nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received unexpected event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 for instance with vm_state deleted and task_state None.
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.367 2 DEBUG nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.367 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Acquiring lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.368 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.368 2 DEBUG oslo_concurrency.lockutils [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] Lock "ead9703a-68cd-4f65-a0dd-296c0a357b90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.368 2 DEBUG nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] No waiting events found dispatching network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.369 2 WARNING nova.compute.manager [req-2e4ecea7-3c1d-4caa-91c7-a2d7a7bbfa4c req-08d470d1-34fb-454c-954a-5a25a08a25e7 fccf17fa55b145b981767f276e754bfd cb8f38b023494646a194522604dffae9 - - default default] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Received unexpected event network-vif-plugged-722eab1f-2c73-4b59-9732-99ee52407450 for instance with vm_state deleted and task_state None.
Oct 02 19:55:51 compute-0 nova_compute[194781]: 2025-10-02 19:55:51.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:51 compute-0 podman[266127]: 2025-10-02 19:55:51.759225905 +0000 UTC m=+0.122232393 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:55:52 compute-0 nova_compute[194781]: 2025-10-02 19:55:52.009 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:52 compute-0 nova_compute[194781]: 2025-10-02 19:55:52.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:53 compute-0 nova_compute[194781]: 2025-10-02 19:55:53.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:53 compute-0 nova_compute[194781]: 2025-10-02 19:55:53.129 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434938.1277983, f0ac40ea-f3c9-4981-ba99-bfbf34bd253a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:55:53 compute-0 nova_compute[194781]: 2025-10-02 19:55:53.129 2 INFO nova.compute.manager [-] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] VM Stopped (Lifecycle Event)
Oct 02 19:55:53 compute-0 nova_compute[194781]: 2025-10-02 19:55:53.156 2 DEBUG nova.compute.manager [None req-740b899b-1c43-46d9-ad37-4b1ed9e8d743 - - - - - -] [instance: f0ac40ea-f3c9-4981-ba99-bfbf34bd253a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:55:54 compute-0 nova_compute[194781]: 2025-10-02 19:55:54.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:56 compute-0 nova_compute[194781]: 2025-10-02 19:55:56.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:57 compute-0 nova_compute[194781]: 2025-10-02 19:55:57.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:58 compute-0 nova_compute[194781]: 2025-10-02 19:55:58.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:55:58 compute-0 nova_compute[194781]: 2025-10-02 19:55:58.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:55:58 compute-0 nova_compute[194781]: 2025-10-02 19:55:58.056 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 19:55:59 compute-0 nova_compute[194781]: 2025-10-02 19:55:59.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:55:59 compute-0 podman[209015]: time="2025-10-02T19:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:55:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:55:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5227 "" "Go-http-client/1.1"
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: ERROR   19:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:56:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:56:01 compute-0 nova_compute[194781]: 2025-10-02 19:56:01.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:02 compute-0 podman[266152]: 2025-10-02 19:56:02.772105143 +0000 UTC m=+0.132257383 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct 02 19:56:02 compute-0 podman[266151]: 2025-10-02 19:56:02.7963289 +0000 UTC m=+0.155537565 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct 02 19:56:04 compute-0 nova_compute[194781]: 2025-10-02 19:56:04.289 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759434949.2865086, ead9703a-68cd-4f65-a0dd-296c0a357b90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 19:56:04 compute-0 nova_compute[194781]: 2025-10-02 19:56:04.289 2 INFO nova.compute.manager [-] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] VM Stopped (Lifecycle Event)
Oct 02 19:56:04 compute-0 nova_compute[194781]: 2025-10-02 19:56:04.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:04 compute-0 nova_compute[194781]: 2025-10-02 19:56:04.430 2 DEBUG nova.compute.manager [None req-eccbc11d-1ae4-47f4-a23b-dc8bb57e597b - - - - - -] [instance: ead9703a-68cd-4f65-a0dd-296c0a357b90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 19:56:04 compute-0 podman[266189]: 2025-10-02 19:56:04.739052174 +0000 UTC m=+0.105833979 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, container_name=kepler, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct 02 19:56:04 compute-0 podman[266190]: 2025-10-02 19:56:04.75088686 +0000 UTC m=+0.102548764 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:56:04 compute-0 podman[266188]: 2025-10-02 19:56:04.775117647 +0000 UTC m=+0.133259149 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 02 19:56:06 compute-0 ovn_controller[97052]: 2025-10-02T19:56:06Z|00208|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:56:06 compute-0 nova_compute[194781]: 2025-10-02 19:56:06.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:06 compute-0 nova_compute[194781]: 2025-10-02 19:56:06.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:08 compute-0 nova_compute[194781]: 2025-10-02 19:56:08.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:08 compute-0 nova_compute[194781]: 2025-10-02 19:56:08.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 19:56:08 compute-0 nova_compute[194781]: 2025-10-02 19:56:08.053 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 19:56:09 compute-0 nova_compute[194781]: 2025-10-02 19:56:09.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:09 compute-0 nova_compute[194781]: 2025-10-02 19:56:09.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 19:56:09 compute-0 nova_compute[194781]: 2025-10-02 19:56:09.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:10 compute-0 ovn_controller[97052]: 2025-10-02T19:56:10Z|00209|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:56:10 compute-0 nova_compute[194781]: 2025-10-02 19:56:10.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:11 compute-0 nova_compute[194781]: 2025-10-02 19:56:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:13 compute-0 podman[266243]: 2025-10-02 19:56:13.761853071 +0000 UTC m=+0.116587517 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 19:56:13 compute-0 podman[266242]: 2025-10-02 19:56:13.795401499 +0000 UTC m=+0.156926431 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:56:14 compute-0 nova_compute[194781]: 2025-10-02 19:56:14.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:14 compute-0 nova_compute[194781]: 2025-10-02 19:56:14.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:16 compute-0 nova_compute[194781]: 2025-10-02 19:56:16.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:16 compute-0 podman[266280]: 2025-10-02 19:56:16.780138503 +0000 UTC m=+0.140793984 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 19:56:16 compute-0 podman[266281]: 2025-10-02 19:56:16.829283325 +0000 UTC m=+0.189545186 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 19:56:17 compute-0 ovn_controller[97052]: 2025-10-02T19:56:17Z|00210|binding|INFO|Releasing lport 8a91c2ef-c369-46ce-8154-e9505f04ef0c from this chassis (sb_readonly=0)
Oct 02 19:56:17 compute-0 nova_compute[194781]: 2025-10-02 19:56:17.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:19 compute-0 nova_compute[194781]: 2025-10-02 19:56:19.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:21 compute-0 nova_compute[194781]: 2025-10-02 19:56:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:22 compute-0 podman[266325]: 2025-10-02 19:56:22.760943004 +0000 UTC m=+0.114212866 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:56:24 compute-0 nova_compute[194781]: 2025-10-02 19:56:24.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:26 compute-0 nova_compute[194781]: 2025-10-02 19:56:26.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:29 compute-0 nova_compute[194781]: 2025-10-02 19:56:29.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:29 compute-0 podman[209015]: time="2025-10-02T19:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:56:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:56:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5227 "" "Go-http-client/1.1"
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: ERROR   19:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:56:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:56:31 compute-0 nova_compute[194781]: 2025-10-02 19:56:31.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:33 compute-0 podman[266349]: 2025-10-02 19:56:33.767600959 +0000 UTC m=+0.124883942 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct 02 19:56:33 compute-0 podman[266350]: 2025-10-02 19:56:33.78078858 +0000 UTC m=+0.140065315 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:56:33 compute-0 unix_chkpwd[266386]: password check failed for user (root)
Oct 02 19:56:33 compute-0 sshd-session[266347]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:56:34 compute-0 nova_compute[194781]: 2025-10-02 19:56:34.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:35 compute-0 sshd-session[266347]: Failed password for root from 193.46.255.103 port 17054 ssh2
Oct 02 19:56:35 compute-0 unix_chkpwd[266387]: password check failed for user (root)
Oct 02 19:56:35 compute-0 podman[266389]: 2025-10-02 19:56:35.750161374 +0000 UTC m=+0.111375613 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, version=9.4, architecture=x86_64, io.buildah.version=1.29.0)
Oct 02 19:56:35 compute-0 podman[266388]: 2025-10-02 19:56:35.761375154 +0000 UTC m=+0.128351942 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, release=1755695350, com.redhat.component=ubi9-minimal-container)
Oct 02 19:56:35 compute-0 podman[266390]: 2025-10-02 19:56:35.789765778 +0000 UTC m=+0.136510833 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 02 19:56:36 compute-0 nova_compute[194781]: 2025-10-02 19:56:36.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:37 compute-0 sshd-session[266347]: Failed password for root from 193.46.255.103 port 17054 ssh2
Oct 02 19:56:38 compute-0 unix_chkpwd[266445]: password check failed for user (root)
Oct 02 19:56:39 compute-0 nova_compute[194781]: 2025-10-02 19:56:39.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:40 compute-0 sshd-session[266347]: Failed password for root from 193.46.255.103 port 17054 ssh2
Oct 02 19:56:41 compute-0 nova_compute[194781]: 2025-10-02 19:56:41.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:41 compute-0 sshd-session[266347]: Received disconnect from 193.46.255.103 port 17054:11:  [preauth]
Oct 02 19:56:41 compute-0 sshd-session[266347]: Disconnected from authenticating user root 193.46.255.103 port 17054 [preauth]
Oct 02 19:56:41 compute-0 sshd-session[266347]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:56:42 compute-0 unix_chkpwd[266448]: password check failed for user (root)
Oct 02 19:56:42 compute-0 sshd-session[266446]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:56:43 compute-0 nova_compute[194781]: 2025-10-02 19:56:43.046 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:43 compute-0 nova_compute[194781]: 2025-10-02 19:56:43.047 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:56:44 compute-0 sshd-session[266446]: Failed password for root from 193.46.255.103 port 17068 ssh2
Oct 02 19:56:44 compute-0 nova_compute[194781]: 2025-10-02 19:56:44.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:44 compute-0 podman[266450]: 2025-10-02 19:56:44.737676088 +0000 UTC m=+0.101071016 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 19:56:44 compute-0 podman[266449]: 2025-10-02 19:56:44.745749137 +0000 UTC m=+0.107664456 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:56:45 compute-0 nova_compute[194781]: 2025-10-02 19:56:45.037 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:45 compute-0 unix_chkpwd[266490]: password check failed for user (root)
Oct 02 19:56:46 compute-0 nova_compute[194781]: 2025-10-02 19:56:46.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:47 compute-0 sshd-session[266446]: Failed password for root from 193.46.255.103 port 17068 ssh2
Oct 02 19:56:47 compute-0 unix_chkpwd[266491]: password check failed for user (root)
Oct 02 19:56:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:56:47.507 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:56:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:56:47.508 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:56:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:56:47.509 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:56:47 compute-0 podman[266492]: 2025-10-02 19:56:47.755385546 +0000 UTC m=+0.116594267 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:56:47 compute-0 podman[266493]: 2025-10-02 19:56:47.831391333 +0000 UTC m=+0.198873447 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 02 19:56:48 compute-0 nova_compute[194781]: 2025-10-02 19:56:48.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:48 compute-0 ovn_controller[97052]: 2025-10-02T19:56:48Z|00211|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.078 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.079 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.079 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.080 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.196 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.309 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.310 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.408 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.410 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:56:49 compute-0 sshd-session[266446]: Failed password for root from 193.46.255.103 port 17068 ssh2
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.490 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.492 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:56:49 compute-0 nova_compute[194781]: 2025-10-02 19:56:49.593 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.047 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.049 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5065MB free_disk=72.40198516845703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.049 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.049 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.135 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.136 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.136 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.181 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.199 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.228 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:56:50 compute-0 nova_compute[194781]: 2025-10-02 19:56:50.229 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:56:50 compute-0 sshd-session[266446]: Received disconnect from 193.46.255.103 port 17068:11:  [preauth]
Oct 02 19:56:50 compute-0 sshd-session[266446]: Disconnected from authenticating user root 193.46.255.103 port 17068 [preauth]
Oct 02 19:56:50 compute-0 sshd-session[266446]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:56:51 compute-0 nova_compute[194781]: 2025-10-02 19:56:51.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:51 compute-0 unix_chkpwd[266552]: password check failed for user (root)
Oct 02 19:56:51 compute-0 sshd-session[266550]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:56:53 compute-0 nova_compute[194781]: 2025-10-02 19:56:53.229 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:53 compute-0 sshd-session[266550]: Failed password for root from 193.46.255.103 port 20144 ssh2
Oct 02 19:56:53 compute-0 podman[266553]: 2025-10-02 19:56:53.767988481 +0000 UTC m=+0.141343318 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:56:54 compute-0 nova_compute[194781]: 2025-10-02 19:56:54.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:54 compute-0 nova_compute[194781]: 2025-10-02 19:56:54.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:54 compute-0 nova_compute[194781]: 2025-10-02 19:56:54.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:54 compute-0 unix_chkpwd[266576]: password check failed for user (root)
Oct 02 19:56:56 compute-0 sshd-session[266550]: Failed password for root from 193.46.255.103 port 20144 ssh2
Oct 02 19:56:56 compute-0 nova_compute[194781]: 2025-10-02 19:56:56.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:57 compute-0 nova_compute[194781]: 2025-10-02 19:56:57.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:57 compute-0 unix_chkpwd[266577]: password check failed for user (root)
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.619 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.620 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.620 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:56:58 compute-0 nova_compute[194781]: 2025-10-02 19:56:58.621 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:56:59 compute-0 sshd-session[266550]: Failed password for root from 193.46.255.103 port 20144 ssh2
Oct 02 19:56:59 compute-0 nova_compute[194781]: 2025-10-02 19:56:59.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:56:59 compute-0 sshd-session[266550]: Received disconnect from 193.46.255.103 port 20144:11:  [preauth]
Oct 02 19:56:59 compute-0 sshd-session[266550]: Disconnected from authenticating user root 193.46.255.103 port 20144 [preauth]
Oct 02 19:56:59 compute-0 sshd-session[266550]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 02 19:56:59 compute-0 podman[209015]: time="2025-10-02T19:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:56:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:56:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5229 "" "Go-http-client/1.1"
Oct 02 19:56:59 compute-0 nova_compute[194781]: 2025-10-02 19:56:59.913 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:56:59 compute-0 nova_compute[194781]: 2025-10-02 19:56:59.929 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:56:59 compute-0 nova_compute[194781]: 2025-10-02 19:56:59.930 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: ERROR   19:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:57:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:57:01 compute-0 nova_compute[194781]: 2025-10-02 19:57:01.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:01 compute-0 nova_compute[194781]: 2025-10-02 19:57:01.925 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:04 compute-0 nova_compute[194781]: 2025-10-02 19:57:04.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:04 compute-0 podman[266578]: 2025-10-02 19:57:04.786942335 +0000 UTC m=+0.151227733 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:57:04 compute-0 podman[266579]: 2025-10-02 19:57:04.790782865 +0000 UTC m=+0.149448258 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:57:06 compute-0 nova_compute[194781]: 2025-10-02 19:57:06.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:06 compute-0 podman[266616]: 2025-10-02 19:57:06.753419134 +0000 UTC m=+0.119351139 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git)
Oct 02 19:57:06 compute-0 podman[266617]: 2025-10-02 19:57:06.754478132 +0000 UTC m=+0.118388055 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9)
Oct 02 19:57:06 compute-0 podman[266618]: 2025-10-02 19:57:06.753038014 +0000 UTC m=+0.099683190 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct 02 19:57:09 compute-0 nova_compute[194781]: 2025-10-02 19:57:09.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:11 compute-0 nova_compute[194781]: 2025-10-02 19:57:11.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.950 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.951 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdba68a23f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.964 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.964 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:12.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:57:12.964876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.009 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 64860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.010 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.012 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:57:13.010697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:57:13.012274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.017 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.017 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.018 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.020 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:57:13.018278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:57:13.019274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:57:13.020299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.021 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:57:13.021520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:57:13.022920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.100 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.101 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.101 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.102 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.103 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.104 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:57:13.102613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:57:13.103694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:57:13.104746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.143 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.144 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.144 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.146 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.147 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.147 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:57:13.146777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.148 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.149 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.149 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.151 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.151 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.152 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.154 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:57:13.150711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.155 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:57:13.155275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.158 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:57:13.158237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.159 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.160 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.162 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.163 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:57:13.162394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.164 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.165 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.166 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.166 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.167 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.168 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.170 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:57:13.166558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.170 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.171 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.171 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.172 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:57:13.170804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.173 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.173 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:57:13.173610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.175 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.176 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.176 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:57:13.176230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.177 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.177 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.178 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:57:13.177749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.180 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:57:13.179250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.180 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.180 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.181 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:57:13.180680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.182 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:57:13.182435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:57:13.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:57:14 compute-0 nova_compute[194781]: 2025-10-02 19:57:14.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:15 compute-0 podman[266673]: 2025-10-02 19:57:15.762736094 +0000 UTC m=+0.121228597 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:57:15 compute-0 podman[266674]: 2025-10-02 19:57:15.797898554 +0000 UTC m=+0.151075900 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:57:16 compute-0 nova_compute[194781]: 2025-10-02 19:57:16.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:18 compute-0 podman[266714]: 2025-10-02 19:57:18.685116625 +0000 UTC m=+0.061750209 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 19:57:18 compute-0 podman[266715]: 2025-10-02 19:57:18.730170241 +0000 UTC m=+0.102482843 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:57:19 compute-0 sshd-session[266758]: Accepted publickey for zuul from 192.168.122.10 port 52898 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:57:19 compute-0 systemd-logind[798]: New session 33 of user zuul.
Oct 02 19:57:19 compute-0 systemd[1]: Started Session 33 of User zuul.
Oct 02 19:57:19 compute-0 sshd-session[266758]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:57:19 compute-0 sudo[266762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 19:57:19 compute-0 sudo[266762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:57:19 compute-0 nova_compute[194781]: 2025-10-02 19:57:19.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:21 compute-0 nova_compute[194781]: 2025-10-02 19:57:21.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:24 compute-0 podman[266907]: 2025-10-02 19:57:24.359440117 +0000 UTC m=+0.145345671 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:57:24 compute-0 nova_compute[194781]: 2025-10-02 19:57:24.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:25 compute-0 ovs-vsctl[266963]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 19:57:26 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 266786 (sos)
Oct 02 19:57:26 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 02 19:57:26 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 02 19:57:26 compute-0 nova_compute[194781]: 2025-10-02 19:57:26.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:27 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 19:57:27 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 19:57:27 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 19:57:28 compute-0 kernel: block vda: the capability attribute has been deprecated.
Oct 02 19:57:29 compute-0 crontab[267411]: (root) LIST (root)
Oct 02 19:57:29 compute-0 nova_compute[194781]: 2025-10-02 19:57:29.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:29 compute-0 podman[209015]: time="2025-10-02T19:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:57:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:57:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5228 "" "Go-http-client/1.1"
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: ERROR   19:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:57:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:57:31 compute-0 nova_compute[194781]: 2025-10-02 19:57:31.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:32 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 19:57:32 compute-0 systemd[1]: Started Hostname Service.
Oct 02 19:57:34 compute-0 nova_compute[194781]: 2025-10-02 19:57:34.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:35 compute-0 podman[267684]: 2025-10-02 19:57:35.00190833 +0000 UTC m=+0.115648494 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:57:35 compute-0 podman[267686]: 2025-10-02 19:57:35.005882912 +0000 UTC m=+0.119619725 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Oct 02 19:57:36 compute-0 nova_compute[194781]: 2025-10-02 19:57:36.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:37 compute-0 podman[267923]: 2025-10-02 19:57:37.750847093 +0000 UTC m=+0.117587834 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, release=1214.1726694543, vcs-type=git, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Oct 02 19:57:37 compute-0 podman[267922]: 2025-10-02 19:57:37.754773764 +0000 UTC m=+0.117664155 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6)
Oct 02 19:57:37 compute-0 podman[267924]: 2025-10-02 19:57:37.778435386 +0000 UTC m=+0.134930872 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 19:57:39 compute-0 nova_compute[194781]: 2025-10-02 19:57:39.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:41 compute-0 nova_compute[194781]: 2025-10-02 19:57:41.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:41 compute-0 ovs-appctl[268659]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 19:57:41 compute-0 ovs-appctl[268673]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 19:57:41 compute-0 ovs-appctl[268684]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 19:57:44 compute-0 nova_compute[194781]: 2025-10-02 19:57:44.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:44 compute-0 nova_compute[194781]: 2025-10-02 19:57:44.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:57:44 compute-0 nova_compute[194781]: 2025-10-02 19:57:44.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:45 compute-0 nova_compute[194781]: 2025-10-02 19:57:45.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:46 compute-0 nova_compute[194781]: 2025-10-02 19:57:46.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:46 compute-0 podman[269843]: 2025-10-02 19:57:46.769602955 +0000 UTC m=+0.128691370 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct 02 19:57:46 compute-0 podman[269840]: 2025-10-02 19:57:46.783890855 +0000 UTC m=+0.140947918 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:57:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:57:47.508 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:57:47.509 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:57:47.509 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:49 compute-0 nova_compute[194781]: 2025-10-02 19:57:49.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:49 compute-0 nova_compute[194781]: 2025-10-02 19:57:49.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:49 compute-0 podman[270056]: 2025-10-02 19:57:49.798409369 +0000 UTC m=+0.150364891 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 19:57:49 compute-0 podman[270057]: 2025-10-02 19:57:49.852848088 +0000 UTC m=+0.201377361 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.082 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.082 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.083 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.084 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.192 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.277 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.279 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.348 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.350 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.427 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.429 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.492 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.938 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.941 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.02416610717773GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.942 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:57:50 compute-0 nova_compute[194781]: 2025-10-02 19:57:50.942 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.128 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.130 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.130 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.235 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.264 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.266 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.266 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:57:51 compute-0 nova_compute[194781]: 2025-10-02 19:57:51.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:54 compute-0 nova_compute[194781]: 2025-10-02 19:57:54.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:54 compute-0 podman[270165]: 2025-10-02 19:57:54.527531325 +0000 UTC m=+0.104615218 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 19:57:55 compute-0 nova_compute[194781]: 2025-10-02 19:57:55.268 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:55 compute-0 nova_compute[194781]: 2025-10-02 19:57:55.268 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:55 compute-0 nova_compute[194781]: 2025-10-02 19:57:55.269 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:55 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 19:57:56 compute-0 nova_compute[194781]: 2025-10-02 19:57:56.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:57 compute-0 nova_compute[194781]: 2025-10-02 19:57:57.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:57 compute-0 systemd[1]: Starting Time & Date Service...
Oct 02 19:57:58 compute-0 systemd[1]: Started Time & Date Service.
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.697 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.698 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.699 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:57:59 compute-0 nova_compute[194781]: 2025-10-02 19:57:59.700 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:57:59 compute-0 podman[209015]: time="2025-10-02T19:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:57:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:57:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5223 "" "Go-http-client/1.1"
Oct 02 19:58:01 compute-0 openstack_network_exporter[211160]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:01 compute-0 openstack_network_exporter[211160]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:01 compute-0 openstack_network_exporter[211160]: ERROR   19:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:58:01 compute-0 openstack_network_exporter[211160]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:58:01 compute-0 openstack_network_exporter[211160]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:58:01 compute-0 nova_compute[194781]: 2025-10-02 19:58:01.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:02 compute-0 nova_compute[194781]: 2025-10-02 19:58:02.720 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:58:02 compute-0 nova_compute[194781]: 2025-10-02 19:58:02.739 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:58:02 compute-0 nova_compute[194781]: 2025-10-02 19:58:02.740 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:58:04 compute-0 nova_compute[194781]: 2025-10-02 19:58:04.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:05 compute-0 podman[270640]: 2025-10-02 19:58:05.214775395 +0000 UTC m=+0.141366918 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:58:05 compute-0 podman[270639]: 2025-10-02 19:58:05.219789115 +0000 UTC m=+0.148090242 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 19:58:06 compute-0 nova_compute[194781]: 2025-10-02 19:58:06.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:08 compute-0 podman[270676]: 2025-10-02 19:58:08.559238767 +0000 UTC m=+0.115293184 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct 02 19:58:08 compute-0 podman[270677]: 2025-10-02 19:58:08.582800396 +0000 UTC m=+0.145345581 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Oct 02 19:58:08 compute-0 podman[270678]: 2025-10-02 19:58:08.597417904 +0000 UTC m=+0.161931680 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 19:58:09 compute-0 nova_compute[194781]: 2025-10-02 19:58:09.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:11 compute-0 nova_compute[194781]: 2025-10-02 19:58:11.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:14 compute-0 nova_compute[194781]: 2025-10-02 19:58:14.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:16 compute-0 nova_compute[194781]: 2025-10-02 19:58:16.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:17 compute-0 podman[270736]: 2025-10-02 19:58:17.504132818 +0000 UTC m=+0.083747428 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 19:58:17 compute-0 podman[270737]: 2025-10-02 19:58:17.774567255 +0000 UTC m=+0.340245864 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:58:18 compute-0 sudo[266762]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:18 compute-0 sshd-session[266761]: Received disconnect from 192.168.122.10 port 52898:11: disconnected by user
Oct 02 19:58:18 compute-0 sshd-session[266761]: Disconnected from user zuul 192.168.122.10 port 52898
Oct 02 19:58:18 compute-0 sshd-session[266758]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:58:18 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Oct 02 19:58:18 compute-0 systemd[1]: session-33.scope: Consumed 1min 51.403s CPU time, 634.2M memory peak, read 232.8M from disk, written 44.8M to disk.
Oct 02 19:58:18 compute-0 systemd-logind[798]: Session 33 logged out. Waiting for processes to exit.
Oct 02 19:58:18 compute-0 systemd-logind[798]: Removed session 33.
Oct 02 19:58:18 compute-0 sshd-session[270779]: Accepted publickey for zuul from 192.168.122.10 port 57904 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:58:18 compute-0 systemd-logind[798]: New session 34 of user zuul.
Oct 02 19:58:18 compute-0 systemd[1]: Started Session 34 of User zuul.
Oct 02 19:58:18 compute-0 sshd-session[270779]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:58:19 compute-0 sudo[270783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-02-clyzpfv.tar.xz
Oct 02 19:58:19 compute-0 sudo[270783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:58:19 compute-0 sudo[270783]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:19 compute-0 sshd-session[270782]: Received disconnect from 192.168.122.10 port 57904:11: disconnected by user
Oct 02 19:58:19 compute-0 sshd-session[270782]: Disconnected from user zuul 192.168.122.10 port 57904
Oct 02 19:58:19 compute-0 sshd-session[270779]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:58:19 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct 02 19:58:19 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Oct 02 19:58:19 compute-0 systemd-logind[798]: Removed session 34.
Oct 02 19:58:19 compute-0 sshd-session[270808]: Accepted publickey for zuul from 192.168.122.10 port 57906 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 19:58:19 compute-0 systemd-logind[798]: New session 35 of user zuul.
Oct 02 19:58:19 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 02 19:58:19 compute-0 sshd-session[270808]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 19:58:19 compute-0 nova_compute[194781]: 2025-10-02 19:58:19.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:19 compute-0 sudo[270812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 02 19:58:19 compute-0 sudo[270812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 19:58:19 compute-0 sudo[270812]: pam_unix(sudo:session): session closed for user root
Oct 02 19:58:19 compute-0 sshd-session[270811]: Received disconnect from 192.168.122.10 port 57906:11: disconnected by user
Oct 02 19:58:19 compute-0 sshd-session[270811]: Disconnected from user zuul 192.168.122.10 port 57906
Oct 02 19:58:19 compute-0 sshd-session[270808]: pam_unix(sshd:session): session closed for user zuul
Oct 02 19:58:19 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 02 19:58:19 compute-0 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Oct 02 19:58:19 compute-0 systemd-logind[798]: Removed session 35.
Oct 02 19:58:20 compute-0 podman[270838]: 2025-10-02 19:58:20.78293266 +0000 UTC m=+0.137603011 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 19:58:20 compute-0 podman[270839]: 2025-10-02 19:58:20.841921927 +0000 UTC m=+0.197812690 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 19:58:21 compute-0 nova_compute[194781]: 2025-10-02 19:58:21.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:24 compute-0 nova_compute[194781]: 2025-10-02 19:58:24.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:24 compute-0 podman[270880]: 2025-10-02 19:58:24.735786824 +0000 UTC m=+0.103043497 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 19:58:26 compute-0 nova_compute[194781]: 2025-10-02 19:58:26.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 19:58:28 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 19:58:29 compute-0 nova_compute[194781]: 2025-10-02 19:58:29.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:29 compute-0 podman[209015]: time="2025-10-02T19:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:58:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:58:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5231 "" "Go-http-client/1.1"
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: ERROR   19:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:58:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:58:31 compute-0 nova_compute[194781]: 2025-10-02 19:58:31.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:34 compute-0 nova_compute[194781]: 2025-10-02 19:58:34.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:35 compute-0 podman[270907]: 2025-10-02 19:58:35.768565065 +0000 UTC m=+0.128667030 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 19:58:35 compute-0 podman[270908]: 2025-10-02 19:58:35.777855476 +0000 UTC m=+0.135444846 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345)
Oct 02 19:58:36 compute-0 nova_compute[194781]: 2025-10-02 19:58:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:38 compute-0 podman[270945]: 2025-10-02 19:58:38.778997174 +0000 UTC m=+0.140164828 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:58:38 compute-0 podman[270946]: 2025-10-02 19:58:38.790536183 +0000 UTC m=+0.142830007 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 19:58:38 compute-0 podman[270947]: 2025-10-02 19:58:38.794790503 +0000 UTC m=+0.138717340 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 19:58:39 compute-0 nova_compute[194781]: 2025-10-02 19:58:39.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:41 compute-0 nova_compute[194781]: 2025-10-02 19:58:41.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:44 compute-0 nova_compute[194781]: 2025-10-02 19:58:44.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:45 compute-0 nova_compute[194781]: 2025-10-02 19:58:45.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:46 compute-0 nova_compute[194781]: 2025-10-02 19:58:46.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:46 compute-0 nova_compute[194781]: 2025-10-02 19:58:46.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:58:46 compute-0 nova_compute[194781]: 2025-10-02 19:58:46.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:58:47.509 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:58:47.510 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:58:47.511 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:47 compute-0 podman[271005]: 2025-10-02 19:58:47.7468185 +0000 UTC m=+0.106825095 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 19:58:48 compute-0 podman[271029]: 2025-10-02 19:58:48.732876652 +0000 UTC m=+0.102803521 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 19:58:49 compute-0 nova_compute[194781]: 2025-10-02 19:58:49.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.117 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.118 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.119 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.119 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.293 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.394 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.395 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.461 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.462 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.544 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.546 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:58:50 compute-0 nova_compute[194781]: 2025-10-02 19:58:50.636 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.050 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.052 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5030MB free_disk=72.40158081054688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.053 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.054 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.167 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.168 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.169 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.230 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.262 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.264 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.264 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:58:51 compute-0 nova_compute[194781]: 2025-10-02 19:58:51.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:51 compute-0 podman[271061]: 2025-10-02 19:58:51.810836908 +0000 UTC m=+0.164855077 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:58:51 compute-0 podman[271062]: 2025-10-02 19:58:51.814849352 +0000 UTC m=+0.168341897 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 19:58:52 compute-0 nova_compute[194781]: 2025-10-02 19:58:52.265 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:54 compute-0 nova_compute[194781]: 2025-10-02 19:58:54.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:55 compute-0 nova_compute[194781]: 2025-10-02 19:58:55.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:55 compute-0 podman[271102]: 2025-10-02 19:58:55.763002882 +0000 UTC m=+0.114024151 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 19:58:56 compute-0 nova_compute[194781]: 2025-10-02 19:58:56.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:56 compute-0 nova_compute[194781]: 2025-10-02 19:58:56.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:57 compute-0 nova_compute[194781]: 2025-10-02 19:58:57.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:58:59 compute-0 podman[209015]: time="2025-10-02T19:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:58:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.763 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.764 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.764 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 19:58:59 compute-0 nova_compute[194781]: 2025-10-02 19:58:59.764 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 19:58:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5230 "" "Go-http-client/1.1"
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: ERROR   19:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:59:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 19:59:01 compute-0 nova_compute[194781]: 2025-10-02 19:59:01.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:01 compute-0 nova_compute[194781]: 2025-10-02 19:59:01.745 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 19:59:01 compute-0 nova_compute[194781]: 2025-10-02 19:59:01.785 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 19:59:01 compute-0 nova_compute[194781]: 2025-10-02 19:59:01.785 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 19:59:01 compute-0 nova_compute[194781]: 2025-10-02 19:59:01.786 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:03 compute-0 nova_compute[194781]: 2025-10-02 19:59:03.782 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:04 compute-0 nova_compute[194781]: 2025-10-02 19:59:04.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:06 compute-0 nova_compute[194781]: 2025-10-02 19:59:06.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:06 compute-0 podman[271125]: 2025-10-02 19:59:06.759549236 +0000 UTC m=+0.126799394 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 19:59:06 compute-0 podman[271126]: 2025-10-02 19:59:06.798064621 +0000 UTC m=+0.151654426 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 19:59:09 compute-0 nova_compute[194781]: 2025-10-02 19:59:09.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:09 compute-0 podman[271165]: 2025-10-02 19:59:09.760673596 +0000 UTC m=+0.116601204 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct 02 19:59:09 compute-0 podman[271166]: 2025-10-02 19:59:09.778008547 +0000 UTC m=+0.124020651 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64)
Oct 02 19:59:09 compute-0 podman[271167]: 2025-10-02 19:59:09.778804748 +0000 UTC m=+0.120075956 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd)
Oct 02 19:59:11 compute-0 nova_compute[194781]: 2025-10-02 19:59:11.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.952 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.952 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.964 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.964 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.965 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{'cpu': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{'cpu': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.967 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:12.971 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T19:59:12.968867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.015 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 66810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.017 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.018 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T19:59:13.018271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T19:59:13.021230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.028 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.030 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.030 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T19:59:13.030133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.032 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.032 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.033 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T19:59:13.032623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.034 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.035 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.035 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T19:59:13.035026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.037 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.038 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.039 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T19:59:13.037772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T19:59:13.040559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.133 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.134 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.134 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.136 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.138 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T19:59:13.136530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.139 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.140 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T19:59:13.139523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T19:59:13.141476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.187 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.188 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.189 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.190 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.191 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.194 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T19:59:13.191512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.195 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T19:59:13.195507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.198 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.199 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.200 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T19:59:13.198975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T19:59:13.201465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.202 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.203 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.204 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.204 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.205 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.206 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.207 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.208 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.208 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.211 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.211 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.212 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.213 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.214 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T19:59:13.204763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.214 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T19:59:13.207738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T19:59:13.210866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.215 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.216 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.216 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.216 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T19:59:13.213886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T19:59:13.216779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.218 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.218 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T19:59:13.217984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.218 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.219 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.219 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.219 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.219 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.219 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.220 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.220 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T19:59:13.219250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T19:59:13.220293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.221 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T19:59:13.221696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.222 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.223 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.223 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.223 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 19:59:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 19:59:14 compute-0 nova_compute[194781]: 2025-10-02 19:59:14.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:16 compute-0 nova_compute[194781]: 2025-10-02 19:59:16.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:18 compute-0 podman[271223]: 2025-10-02 19:59:18.745315381 +0000 UTC m=+0.106668990 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 19:59:18 compute-0 podman[271246]: 2025-10-02 19:59:18.909481289 +0000 UTC m=+0.110797700 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:59:19 compute-0 nova_compute[194781]: 2025-10-02 19:59:19.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:21 compute-0 nova_compute[194781]: 2025-10-02 19:59:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:22 compute-0 podman[271265]: 2025-10-02 19:59:22.7590944 +0000 UTC m=+0.122049288 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 19:59:22 compute-0 podman[271266]: 2025-10-02 19:59:22.834103956 +0000 UTC m=+0.193623823 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 19:59:24 compute-0 nova_compute[194781]: 2025-10-02 19:59:24.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:26 compute-0 nova_compute[194781]: 2025-10-02 19:59:26.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:26 compute-0 podman[271307]: 2025-10-02 19:59:26.743762027 +0000 UTC m=+0.108855517 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 19:59:29 compute-0 nova_compute[194781]: 2025-10-02 19:59:29.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:29 compute-0 podman[209015]: time="2025-10-02T19:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:59:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:59:29 compute-0 podman[209015]: @ - - [02/Oct/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5230 "" "Go-http-client/1.1"
Oct 02 19:59:31 compute-0 openstack_network_exporter[211160]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:31 compute-0 openstack_network_exporter[211160]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 19:59:31 compute-0 openstack_network_exporter[211160]: ERROR   19:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 19:59:31 compute-0 openstack_network_exporter[211160]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 19:59:31 compute-0 openstack_network_exporter[211160]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 19:59:31 compute-0 nova_compute[194781]: 2025-10-02 19:59:31.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:34 compute-0 nova_compute[194781]: 2025-10-02 19:59:34.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:36 compute-0 nova_compute[194781]: 2025-10-02 19:59:36.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:37 compute-0 podman[271330]: 2025-10-02 19:59:37.748651493 +0000 UTC m=+0.105932470 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 19:59:37 compute-0 podman[271331]: 2025-10-02 19:59:37.778445825 +0000 UTC m=+0.128482199 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true)
Oct 02 19:59:39 compute-0 nova_compute[194781]: 2025-10-02 19:59:39.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:40 compute-0 podman[271367]: 2025-10-02 19:59:40.74116039 +0000 UTC m=+0.105037356 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, distribution-scope=public, release=1214.1726694543, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Oct 02 19:59:40 compute-0 podman[271366]: 2025-10-02 19:59:40.764590573 +0000 UTC m=+0.131738916 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 19:59:40 compute-0 podman[271368]: 2025-10-02 19:59:40.785924531 +0000 UTC m=+0.132405104 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 19:59:41 compute-0 nova_compute[194781]: 2025-10-02 19:59:41.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:44 compute-0 nova_compute[194781]: 2025-10-02 19:59:44.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:46 compute-0 nova_compute[194781]: 2025-10-02 19:59:46.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:46 compute-0 nova_compute[194781]: 2025-10-02 19:59:46.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 19:59:46 compute-0 nova_compute[194781]: 2025-10-02 19:59:46.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:47 compute-0 nova_compute[194781]: 2025-10-02 19:59:47.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:59:47.512 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:59:47.514 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 19:59:47.517 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:49 compute-0 nova_compute[194781]: 2025-10-02 19:59:49.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:49 compute-0 podman[271426]: 2025-10-02 19:59:49.749577871 +0000 UTC m=+0.107919602 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 19:59:49 compute-0 podman[271425]: 2025-10-02 19:59:49.750117416 +0000 UTC m=+0.111375465 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.073 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.074 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.075 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.075 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.171 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.265 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.266 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.327 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.329 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.429 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.431 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.493 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 19:59:51 compute-0 nova_compute[194781]: 2025-10-02 19:59:51.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.040 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.041 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=72.40179443359375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.042 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.042 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.264 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.265 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.265 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.317 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.355 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.358 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 19:59:52 compute-0 nova_compute[194781]: 2025-10-02 19:59:52.358 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 19:59:53 compute-0 podman[271480]: 2025-10-02 19:59:53.74750587 +0000 UTC m=+0.106630098 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 19:59:53 compute-0 podman[271481]: 2025-10-02 19:59:53.875988019 +0000 UTC m=+0.224414753 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 19:59:54 compute-0 nova_compute[194781]: 2025-10-02 19:59:54.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:56 compute-0 nova_compute[194781]: 2025-10-02 19:59:56.359 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:56 compute-0 nova_compute[194781]: 2025-10-02 19:59:56.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:57 compute-0 podman[271524]: 2025-10-02 19:59:57.736160703 +0000 UTC m=+0.097352811 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 19:59:58 compute-0 nova_compute[194781]: 2025-10-02 19:59:58.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:59 compute-0 nova_compute[194781]: 2025-10-02 19:59:59.029 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 19:59:59 compute-0 nova_compute[194781]: 2025-10-02 19:59:59.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 19:59:59 compute-0 podman[209015]: time="2025-10-02T19:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 19:59:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 19:59:59 compute-0 podman[209015]: @ - - [02/Oct/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5236 "" "Go-http-client/1.1"
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.809 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.811 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.812 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:00:00 compute-0 nova_compute[194781]: 2025-10-02 20:00:00.813 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: ERROR   20:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:00:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:00:01 compute-0 nova_compute[194781]: 2025-10-02 20:00:01.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:02 compute-0 nova_compute[194781]: 2025-10-02 20:00:02.608 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:00:02 compute-0 nova_compute[194781]: 2025-10-02 20:00:02.634 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:00:02 compute-0 nova_compute[194781]: 2025-10-02 20:00:02.635 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:00:02 compute-0 nova_compute[194781]: 2025-10-02 20:00:02.637 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:04 compute-0 nova_compute[194781]: 2025-10-02 20:00:04.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:06 compute-0 nova_compute[194781]: 2025-10-02 20:00:06.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:08 compute-0 podman[271548]: 2025-10-02 20:00:08.745011283 +0000 UTC m=+0.108603610 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:00:08 compute-0 podman[271549]: 2025-10-02 20:00:08.768821767 +0000 UTC m=+0.127368330 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, 
org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 02 20:00:09 compute-0 nova_compute[194781]: 2025-10-02 20:00:09.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:11 compute-0 nova_compute[194781]: 2025-10-02 20:00:11.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:11 compute-0 podman[271590]: 2025-10-02 20:00:11.728891246 +0000 UTC m=+0.092661626 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.buildah.version=1.29.0, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, version=9.4)
Oct 02 20:00:11 compute-0 podman[271591]: 2025-10-02 20:00:11.748058416 +0000 UTC m=+0.099686133 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Oct 02 20:00:11 compute-0 podman[271589]: 2025-10-02 20:00:11.762686965 +0000 UTC m=+0.129012423 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Oct 02 20:00:14 compute-0 nova_compute[194781]: 2025-10-02 20:00:14.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:16 compute-0 nova_compute[194781]: 2025-10-02 20:00:16.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:19 compute-0 nova_compute[194781]: 2025-10-02 20:00:19.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:20 compute-0 podman[271644]: 2025-10-02 20:00:20.727972417 +0000 UTC m=+0.091327681 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:00:20 compute-0 podman[271645]: 2025-10-02 20:00:20.735986261 +0000 UTC m=+0.092993926 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 20:00:21 compute-0 nova_compute[194781]: 2025-10-02 20:00:21.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:24 compute-0 nova_compute[194781]: 2025-10-02 20:00:24.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:24 compute-0 podman[271687]: 2025-10-02 20:00:24.694675906 +0000 UTC m=+0.072565824 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 20:00:24 compute-0 podman[271688]: 2025-10-02 20:00:24.744401445 +0000 UTC m=+0.114329280 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:00:26 compute-0 nova_compute[194781]: 2025-10-02 20:00:26.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:28 compute-0 podman[271729]: 2025-10-02 20:00:28.726595975 +0000 UTC m=+0.093273172 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 02 20:00:29 compute-0 nova_compute[194781]: 2025-10-02 20:00:29.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:29 compute-0 podman[209015]: time="2025-10-02T20:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:00:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:00:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5233 "" "Go-http-client/1.1"
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: ERROR   20:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:00:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:00:31 compute-0 nova_compute[194781]: 2025-10-02 20:00:31.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:34 compute-0 nova_compute[194781]: 2025-10-02 20:00:34.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:36 compute-0 nova_compute[194781]: 2025-10-02 20:00:36.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:39 compute-0 nova_compute[194781]: 2025-10-02 20:00:39.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:39 compute-0 podman[271753]: 2025-10-02 20:00:39.74279423 +0000 UTC m=+0.105350840 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 02 20:00:39 compute-0 podman[271752]: 2025-10-02 20:00:39.797955519 +0000 UTC m=+0.156400804 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:00:41 compute-0 nova_compute[194781]: 2025-10-02 20:00:41.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:42 compute-0 podman[271791]: 2025-10-02 20:00:42.718646196 +0000 UTC m=+0.085594577 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:00:42 compute-0 podman[271789]: 2025-10-02 20:00:42.744049144 +0000 UTC m=+0.122917889 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6)
Oct 02 20:00:42 compute-0 podman[271790]: 2025-10-02 20:00:42.751985377 +0000 UTC m=+0.121104883 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, release-0.7.12=, managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler)
Oct 02 20:00:44 compute-0 nova_compute[194781]: 2025-10-02 20:00:44.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:46 compute-0 nova_compute[194781]: 2025-10-02 20:00:46.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:00:47.514 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:00:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:00:47.515 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:00:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:00:47.515 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:00:48 compute-0 nova_compute[194781]: 2025-10-02 20:00:48.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:48 compute-0 nova_compute[194781]: 2025-10-02 20:00:48.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:00:49 compute-0 nova_compute[194781]: 2025-10-02 20:00:49.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:49 compute-0 nova_compute[194781]: 2025-10-02 20:00:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:51 compute-0 nova_compute[194781]: 2025-10-02 20:00:51.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:51 compute-0 podman[271845]: 2025-10-02 20:00:51.718950099 +0000 UTC m=+0.090167533 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:00:51 compute-0 podman[271846]: 2025-10-02 20:00:51.735862121 +0000 UTC m=+0.098120946 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.065 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.066 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.067 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.149 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.219 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.220 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.277 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.279 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.337 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.338 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.398 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.852 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.854 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=72.40179443359375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.854 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:00:53 compute-0 nova_compute[194781]: 2025-10-02 20:00:53.855 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.162 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.163 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.163 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.249 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.405 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.405 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.436 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.465 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.524 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.579 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.581 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:00:54 compute-0 nova_compute[194781]: 2025-10-02 20:00:54.582 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:00:55 compute-0 podman[271902]: 2025-10-02 20:00:55.751899634 +0000 UTC m=+0.114487274 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 20:00:55 compute-0 podman[271903]: 2025-10-02 20:00:55.824709703 +0000 UTC m=+0.180309565 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 20:00:56 compute-0 nova_compute[194781]: 2025-10-02 20:00:56.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:58 compute-0 nova_compute[194781]: 2025-10-02 20:00:58.582 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:00:59 compute-0 nova_compute[194781]: 2025-10-02 20:00:59.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:00:59 compute-0 podman[209015]: time="2025-10-02T20:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:00:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:00:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5226 "" "Go-http-client/1.1"
Oct 02 20:00:59 compute-0 podman[271946]: 2025-10-02 20:00:59.770047592 +0000 UTC m=+0.133434788 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:01:00 compute-0 nova_compute[194781]: 2025-10-02 20:01:00.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:00 compute-0 nova_compute[194781]: 2025-10-02 20:01:00.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:01:01 compute-0 CROND[271971]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 20:01:01 compute-0 run-parts[271974]: (/etc/cron.hourly) starting 0anacron
Oct 02 20:01:01 compute-0 run-parts[271980]: (/etc/cron.hourly) finished 0anacron
Oct 02 20:01:01 compute-0 CROND[271970]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: ERROR   20:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:01:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.448 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.449 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.450 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.451 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:01:01 compute-0 nova_compute[194781]: 2025-10-02 20:01:01.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:03 compute-0 nova_compute[194781]: 2025-10-02 20:01:03.168 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:01:03 compute-0 nova_compute[194781]: 2025-10-02 20:01:03.251 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:01:03 compute-0 nova_compute[194781]: 2025-10-02 20:01:03.252 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:01:03 compute-0 nova_compute[194781]: 2025-10-02 20:01:03.254 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:04 compute-0 nova_compute[194781]: 2025-10-02 20:01:04.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:06 compute-0 nova_compute[194781]: 2025-10-02 20:01:06.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:07 compute-0 nova_compute[194781]: 2025-10-02 20:01:07.250 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:09 compute-0 nova_compute[194781]: 2025-10-02 20:01:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:10 compute-0 nova_compute[194781]: 2025-10-02 20:01:10.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:10 compute-0 nova_compute[194781]: 2025-10-02 20:01:10.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:01:10 compute-0 nova_compute[194781]: 2025-10-02 20:01:10.083 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:01:10 compute-0 podman[271981]: 2025-10-02 20:01:10.765054337 +0000 UTC m=+0.112463383 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:01:10 compute-0 podman[271982]: 2025-10-02 20:01:10.786563396 +0000 UTC m=+0.125215888 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, 
org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 02 20:01:11 compute-0 nova_compute[194781]: 2025-10-02 20:01:11.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:12 compute-0 nova_compute[194781]: 2025-10-02 20:01:12.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:12 compute-0 nova_compute[194781]: 2025-10-02 20:01:12.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.952 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.953 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.962 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.963 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:12.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:01:12.963816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.005 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 68730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.007 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.008 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.008 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.009 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:01:13.008339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:01:13.010557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.017 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.019 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:01:13.019544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.021 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.022 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:01:13.022036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.024 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:01:13.024323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.026 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.027 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.028 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:01:13.026857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.029 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:01:13.029812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.118 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.119 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.119 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.122 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:01:13.121790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.124 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.124 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:01:13.124371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:01:13.126678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.170 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.171 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.171 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.173 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.173 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:01:13.173492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.174 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.176 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.176 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.177 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.177 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.178 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:01:13.176805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.180 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.180 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:01:13.180026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.182 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:01:13.182359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.183 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.185 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.186 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:01:13.185353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.186 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.188 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.189 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:01:13.188838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.189 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.190 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.191 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:01:13.192022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.193 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.195 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:01:13.195143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.198 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.202 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.204 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.205 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.206 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.207 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:01:13.198257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:01:13.199322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:01:13.200352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:01:13.201352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:01:13.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:01:13.202330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:01:13 compute-0 podman[272023]: 2025-10-02 20:01:13.788061057 +0000 UTC m=+0.142005326 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal)
Oct 02 20:01:13 compute-0 podman[272024]: 2025-10-02 20:01:13.793946118 +0000 UTC m=+0.134619928 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, version=9.4, maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, name=ubi9, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 02 20:01:13 compute-0 podman[272025]: 2025-10-02 20:01:13.802950768 +0000 UTC m=+0.137264386 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:01:14 compute-0 nova_compute[194781]: 2025-10-02 20:01:14.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:16 compute-0 nova_compute[194781]: 2025-10-02 20:01:16.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:19 compute-0 nova_compute[194781]: 2025-10-02 20:01:19.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:21 compute-0 nova_compute[194781]: 2025-10-02 20:01:21.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:21 compute-0 nova_compute[194781]: 2025-10-02 20:01:21.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:22 compute-0 podman[272081]: 2025-10-02 20:01:22.740170614 +0000 UTC m=+0.120510718 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 02 20:01:22 compute-0 podman[272082]: 2025-10-02 20:01:22.767481311 +0000 UTC m=+0.133437248 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 20:01:24 compute-0 nova_compute[194781]: 2025-10-02 20:01:24.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:26 compute-0 nova_compute[194781]: 2025-10-02 20:01:26.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:26 compute-0 podman[272122]: 2025-10-02 20:01:26.721433949 +0000 UTC m=+0.096539616 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:01:26 compute-0 podman[272123]: 2025-10-02 20:01:26.80998078 +0000 UTC m=+0.174272501 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:01:29 compute-0 nova_compute[194781]: 2025-10-02 20:01:29.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:29 compute-0 podman[209015]: time="2025-10-02T20:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:01:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:01:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5227 "" "Go-http-client/1.1"
Oct 02 20:01:30 compute-0 podman[272164]: 2025-10-02 20:01:30.77351495 +0000 UTC m=+0.137442450 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: ERROR   20:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:01:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:01:31 compute-0 nova_compute[194781]: 2025-10-02 20:01:31.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:34 compute-0 nova_compute[194781]: 2025-10-02 20:01:34.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:36 compute-0 nova_compute[194781]: 2025-10-02 20:01:36.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:39 compute-0 nova_compute[194781]: 2025-10-02 20:01:39.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:41 compute-0 nova_compute[194781]: 2025-10-02 20:01:41.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:41 compute-0 podman[272189]: 2025-10-02 20:01:41.761685744 +0000 UTC m=+0.131098328 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:01:41 compute-0 podman[272188]: 2025-10-02 20:01:41.773034753 +0000 UTC m=+0.136710291 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 20:01:44 compute-0 nova_compute[194781]: 2025-10-02 20:01:44.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:44 compute-0 podman[272226]: 2025-10-02 20:01:44.782260662 +0000 UTC m=+0.136428214 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350)
Oct 02 20:01:44 compute-0 podman[272227]: 2025-10-02 20:01:44.782504668 +0000 UTC m=+0.131062237 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized 
applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Oct 02 20:01:44 compute-0 podman[272228]: 2025-10-02 20:01:44.783465993 +0000 UTC m=+0.125898376 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 20:01:46 compute-0 nova_compute[194781]: 2025-10-02 20:01:46.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:01:47.516 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:01:47.516 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:01:47.517 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:48 compute-0 nova_compute[194781]: 2025-10-02 20:01:48.061 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:48 compute-0 nova_compute[194781]: 2025-10-02 20:01:48.063 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:01:49 compute-0 nova_compute[194781]: 2025-10-02 20:01:49.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:51 compute-0 nova_compute[194781]: 2025-10-02 20:01:51.037 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:51 compute-0 nova_compute[194781]: 2025-10-02 20:01:51.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:53 compute-0 podman[272284]: 2025-10-02 20:01:53.738062885 +0000 UTC m=+0.093306353 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 20:01:53 compute-0 podman[272283]: 2025-10-02 20:01:53.760020966 +0000 UTC m=+0.129298592 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:01:54 compute-0 nova_compute[194781]: 2025-10-02 20:01:54.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.037 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.038 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.109 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.110 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.111 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.112 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.275 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.375 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.377 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.470 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.471 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.568 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.570 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:01:55 compute-0 nova_compute[194781]: 2025-10-02 20:01:55.638 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.131 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.133 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5053MB free_disk=72.40179443359375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.134 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.135 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.234 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.234 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.235 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.283 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.321 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.324 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.325 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:56 compute-0 nova_compute[194781]: 2025-10-02 20:01:56.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:57 compute-0 nova_compute[194781]: 2025-10-02 20:01:57.429 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:57 compute-0 nova_compute[194781]: 2025-10-02 20:01:57.467 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Triggering sync for uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 20:01:57 compute-0 nova_compute[194781]: 2025-10-02 20:01:57.468 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:01:57 compute-0 nova_compute[194781]: 2025-10-02 20:01:57.469 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:01:57 compute-0 nova_compute[194781]: 2025-10-02 20:01:57.500 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "7aab78e5-2ff6-460d-87d6-f4c21f2d4403" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:01:57 compute-0 podman[272336]: 2025-10-02 20:01:57.715589765 +0000 UTC m=+0.066189961 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:01:57 compute-0 podman[272337]: 2025-10-02 20:01:57.796358177 +0000 UTC m=+0.143910955 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 20:01:59 compute-0 nova_compute[194781]: 2025-10-02 20:01:59.073 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:01:59 compute-0 nova_compute[194781]: 2025-10-02 20:01:59.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:01:59 compute-0 podman[209015]: time="2025-10-02T20:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:01:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:01:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5233 "" "Go-http-client/1.1"
Oct 02 20:02:00 compute-0 nova_compute[194781]: 2025-10-02 20:02:00.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: ERROR   20:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:02:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:02:01 compute-0 nova_compute[194781]: 2025-10-02 20:02:01.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:01 compute-0 podman[272376]: 2025-10-02 20:02:01.733880215 +0000 UTC m=+0.100478126 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.032 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.935 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.936 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.936 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:02:02 compute-0 nova_compute[194781]: 2025-10-02 20:02:02.936 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:02:03 compute-0 unix_chkpwd[272400]: password check failed for user (root)
Oct 02 20:02:03 compute-0 sshd-session[272398]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 02 20:02:04 compute-0 nova_compute[194781]: 2025-10-02 20:02:04.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:05 compute-0 sshd-session[272398]: Failed password for root from 193.46.255.99 port 20048 ssh2
Oct 02 20:02:05 compute-0 nova_compute[194781]: 2025-10-02 20:02:05.946 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:02:05 compute-0 nova_compute[194781]: 2025-10-02 20:02:05.970 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:02:05 compute-0 nova_compute[194781]: 2025-10-02 20:02:05.971 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:02:05 compute-0 nova_compute[194781]: 2025-10-02 20:02:05.972 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:06 compute-0 unix_chkpwd[272401]: password check failed for user (root)
Oct 02 20:02:06 compute-0 nova_compute[194781]: 2025-10-02 20:02:06.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:08 compute-0 sshd-session[272398]: Failed password for root from 193.46.255.99 port 20048 ssh2
Oct 02 20:02:09 compute-0 unix_chkpwd[272402]: password check failed for user (root)
Oct 02 20:02:09 compute-0 nova_compute[194781]: 2025-10-02 20:02:09.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:11 compute-0 nova_compute[194781]: 2025-10-02 20:02:11.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:11 compute-0 sshd-session[272398]: Failed password for root from 193.46.255.99 port 20048 ssh2
Oct 02 20:02:12 compute-0 sshd-session[272398]: Received disconnect from 193.46.255.99 port 20048:11:  [preauth]
Oct 02 20:02:12 compute-0 sshd-session[272398]: Disconnected from authenticating user root 193.46.255.99 port 20048 [preauth]
Oct 02 20:02:12 compute-0 sshd-session[272398]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 02 20:02:12 compute-0 podman[272403]: 2025-10-02 20:02:12.782487714 +0000 UTC m=+0.139318548 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3)
Oct 02 20:02:12 compute-0 podman[272404]: 2025-10-02 20:02:12.815708892 +0000 UTC m=+0.166332247 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:02:13 compute-0 unix_chkpwd[272444]: password check failed for user (root)
Oct 02 20:02:13 compute-0 sshd-session[272435]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 02 20:02:14 compute-0 nova_compute[194781]: 2025-10-02 20:02:14.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:15 compute-0 sshd-session[272435]: Failed password for root from 193.46.255.99 port 35778 ssh2
Oct 02 20:02:15 compute-0 podman[272445]: 2025-10-02 20:02:15.752352597 +0000 UTC m=+0.118251530 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Oct 02 20:02:15 compute-0 podman[272447]: 2025-10-02 20:02:15.763099672 +0000 UTC m=+0.105710720 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 02 20:02:15 compute-0 podman[272446]: 2025-10-02 20:02:15.791692032 +0000 UTC m=+0.143745131 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, release-0.7.12=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30)
Oct 02 20:02:16 compute-0 unix_chkpwd[272505]: password check failed for user (root)
Oct 02 20:02:16 compute-0 nova_compute[194781]: 2025-10-02 20:02:16.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:17 compute-0 sshd-session[272435]: Failed password for root from 193.46.255.99 port 35778 ssh2
Oct 02 20:02:18 compute-0 unix_chkpwd[272506]: password check failed for user (root)
Oct 02 20:02:19 compute-0 nova_compute[194781]: 2025-10-02 20:02:19.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:20 compute-0 sshd-session[272435]: Failed password for root from 193.46.255.99 port 35778 ssh2
Oct 02 20:02:21 compute-0 sshd-session[272435]: Received disconnect from 193.46.255.99 port 35778:11:  [preauth]
Oct 02 20:02:21 compute-0 sshd-session[272435]: Disconnected from authenticating user root 193.46.255.99 port 35778 [preauth]
Oct 02 20:02:21 compute-0 sshd-session[272435]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 02 20:02:21 compute-0 nova_compute[194781]: 2025-10-02 20:02:21.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:22 compute-0 unix_chkpwd[272510]: password check failed for user (root)
Oct 02 20:02:22 compute-0 sshd-session[272508]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 02 20:02:23 compute-0 sshd-session[272508]: Failed password for root from 193.46.255.99 port 40886 ssh2
Oct 02 20:02:24 compute-0 nova_compute[194781]: 2025-10-02 20:02:24.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:24 compute-0 podman[272512]: 2025-10-02 20:02:24.772492343 +0000 UTC m=+0.126150962 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 20:02:24 compute-0 podman[272511]: 2025-10-02 20:02:24.787241369 +0000 UTC m=+0.145821714 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:02:25 compute-0 unix_chkpwd[272549]: password check failed for user (root)
Oct 02 20:02:26 compute-0 nova_compute[194781]: 2025-10-02 20:02:26.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:27 compute-0 sshd-session[272508]: Failed password for root from 193.46.255.99 port 40886 ssh2
Oct 02 20:02:28 compute-0 unix_chkpwd[272550]: password check failed for user (root)
Oct 02 20:02:28 compute-0 podman[272551]: 2025-10-02 20:02:28.753410619 +0000 UTC m=+0.110776129 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:02:28 compute-0 podman[272552]: 2025-10-02 20:02:28.819116607 +0000 UTC m=+0.170025642 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 20:02:29 compute-0 nova_compute[194781]: 2025-10-02 20:02:29.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:29 compute-0 sshd-session[272508]: Failed password for root from 193.46.255.99 port 40886 ssh2
Oct 02 20:02:29 compute-0 podman[209015]: time="2025-10-02T20:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:02:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:02:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5235 "" "Go-http-client/1.1"
Oct 02 20:02:30 compute-0 sshd-session[272508]: Received disconnect from 193.46.255.99 port 40886:11:  [preauth]
Oct 02 20:02:30 compute-0 sshd-session[272508]: Disconnected from authenticating user root 193.46.255.99 port 40886 [preauth]
Oct 02 20:02:30 compute-0 sshd-session[272508]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: ERROR   20:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:02:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:02:31 compute-0 nova_compute[194781]: 2025-10-02 20:02:31.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:32 compute-0 podman[272592]: 2025-10-02 20:02:32.733711291 +0000 UTC m=+0.095995682 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:02:34 compute-0 nova_compute[194781]: 2025-10-02 20:02:34.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:36 compute-0 nova_compute[194781]: 2025-10-02 20:02:36.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:39 compute-0 nova_compute[194781]: 2025-10-02 20:02:39.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:41 compute-0 nova_compute[194781]: 2025-10-02 20:02:41.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:43 compute-0 podman[272617]: 2025-10-02 20:02:43.786523149 +0000 UTC m=+0.138629760 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Oct 02 20:02:43 compute-0 podman[272616]: 2025-10-02 20:02:43.793905878 +0000 UTC m=+0.150480523 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:02:44 compute-0 nova_compute[194781]: 2025-10-02 20:02:44.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:46 compute-0 nova_compute[194781]: 2025-10-02 20:02:46.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:46 compute-0 podman[272655]: 2025-10-02 20:02:46.766316554 +0000 UTC m=+0.116641319 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Oct 02 20:02:46 compute-0 podman[272656]: 2025-10-02 20:02:46.781485771 +0000 UTC m=+0.125977477 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 20:02:46 compute-0 podman[272654]: 2025-10-02 20:02:46.790292366 +0000 UTC m=+0.148023550 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 02 20:02:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:02:47.517 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:02:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:02:47.518 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:02:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:02:47.519 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:02:48 compute-0 nova_compute[194781]: 2025-10-02 20:02:48.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:48 compute-0 nova_compute[194781]: 2025-10-02 20:02:48.033 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:02:49 compute-0 nova_compute[194781]: 2025-10-02 20:02:49.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:51 compute-0 nova_compute[194781]: 2025-10-02 20:02:51.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:53 compute-0 nova_compute[194781]: 2025-10-02 20:02:53.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:54 compute-0 nova_compute[194781]: 2025-10-02 20:02:54.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:55 compute-0 nova_compute[194781]: 2025-10-02 20:02:55.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:55 compute-0 podman[272714]: 2025-10-02 20:02:55.739692482 +0000 UTC m=+0.103370440 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:02:55 compute-0 podman[272715]: 2025-10-02 20:02:55.747576573 +0000 UTC m=+0.099319947 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 20:02:56 compute-0 nova_compute[194781]: 2025-10-02 20:02:56.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.101 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.101 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.101 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.102 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.481 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.582 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.584 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.646 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.648 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.710 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.712 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:02:57 compute-0 nova_compute[194781]: 2025-10-02 20:02:57.811 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:02:58 compute-0 nova_compute[194781]: 2025-10-02 20:02:58.436 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:02:58 compute-0 nova_compute[194781]: 2025-10-02 20:02:58.438 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5039MB free_disk=72.40175247192383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:02:58 compute-0 nova_compute[194781]: 2025-10-02 20:02:58.438 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:02:58 compute-0 nova_compute[194781]: 2025-10-02 20:02:58.439 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.190 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.191 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.192 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.266 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.281 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.282 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.283 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:02:59 compute-0 nova_compute[194781]: 2025-10-02 20:02:59.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:02:59 compute-0 podman[209015]: time="2025-10-02T20:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:02:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:02:59 compute-0 podman[272769]: 2025-10-02 20:02:59.765400021 +0000 UTC m=+0.127497266 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 20:02:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5226 "" "Go-http-client/1.1"
Oct 02 20:02:59 compute-0 podman[272770]: 2025-10-02 20:02:59.793794076 +0000 UTC m=+0.162280844 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: ERROR   20:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:03:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:03:01 compute-0 nova_compute[194781]: 2025-10-02 20:03:01.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:02 compute-0 nova_compute[194781]: 2025-10-02 20:03:02.283 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:02 compute-0 nova_compute[194781]: 2025-10-02 20:03:02.283 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:02 compute-0 nova_compute[194781]: 2025-10-02 20:03:02.283 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:03 compute-0 podman[272811]: 2025-10-02 20:03:03.724462842 +0000 UTC m=+0.089586108 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:03:04 compute-0 nova_compute[194781]: 2025-10-02 20:03:04.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:04 compute-0 nova_compute[194781]: 2025-10-02 20:03:04.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:03:04 compute-0 nova_compute[194781]: 2025-10-02 20:03:04.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:03:04 compute-0 nova_compute[194781]: 2025-10-02 20:03:04.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:05 compute-0 nova_compute[194781]: 2025-10-02 20:03:05.026 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:03:05 compute-0 nova_compute[194781]: 2025-10-02 20:03:05.026 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:03:05 compute-0 nova_compute[194781]: 2025-10-02 20:03:05.027 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:03:05 compute-0 nova_compute[194781]: 2025-10-02 20:03:05.027 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:03:06 compute-0 nova_compute[194781]: 2025-10-02 20:03:06.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:07 compute-0 nova_compute[194781]: 2025-10-02 20:03:07.276 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:03:07 compute-0 nova_compute[194781]: 2025-10-02 20:03:07.298 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:03:07 compute-0 nova_compute[194781]: 2025-10-02 20:03:07.299 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:03:07 compute-0 nova_compute[194781]: 2025-10-02 20:03:07.300 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:09 compute-0 nova_compute[194781]: 2025-10-02 20:03:09.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:10 compute-0 nova_compute[194781]: 2025-10-02 20:03:10.296 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:11 compute-0 nova_compute[194781]: 2025-10-02 20:03:11.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.954 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.955 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.965 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.965 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.966 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.966 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:12.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:03:12.966685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.001 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 70670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.003 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.006 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.008 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:03:13.004992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:03:13.011062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.020 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.023 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.024 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.027 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.028 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.029 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.032 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.034 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:03:13.023888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:03:13.028806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:03:13.033770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.038 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.039 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:03:13.038324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.041 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:03:13.041007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.126 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.127 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.127 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.130 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:03:13.130039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.132 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:03:13.132638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.134 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.135 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:03:13.135243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.177 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.178 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.178 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.180 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.181 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.181 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.181 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.182 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:03:13.180989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.183 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.185 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:03:13.185019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.186 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.186 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.187 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.189 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:03:13.188646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.190 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.191 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:03:13.190967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.191 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.193 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.194 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.194 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:03:13.194016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.195 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.196 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.197 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.197 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:03:13.197307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.198 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.200 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.201 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.203 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.203 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.204 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.206 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.211 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.212 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.212 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.212 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:03:13.200419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:03:13.203118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:03:13.205998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:03:13.207759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:03:13.209313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:03:13.210859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:03:13.212661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.216 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.217 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:03:13.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:03:14 compute-0 nova_compute[194781]: 2025-10-02 20:03:14.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:14 compute-0 podman[272836]: 2025-10-02 20:03:14.732767524 +0000 UTC m=+0.103804461 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 20:03:14 compute-0 podman[272837]: 2025-10-02 20:03:14.743459277 +0000 UTC m=+0.095749556 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 02 20:03:16 compute-0 nova_compute[194781]: 2025-10-02 20:03:16.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:17 compute-0 podman[272875]: 2025-10-02 20:03:17.693979386 +0000 UTC m=+0.070164752 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Oct 02 20:03:17 compute-0 podman[272877]: 2025-10-02 20:03:17.716649395 +0000 UTC m=+0.076934625 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:03:17 compute-0 podman[272876]: 2025-10-02 20:03:17.744156538 +0000 UTC m=+0.116730052 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Oct 02 20:03:19 compute-0 nova_compute[194781]: 2025-10-02 20:03:19.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:21 compute-0 nova_compute[194781]: 2025-10-02 20:03:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:23 compute-0 nova_compute[194781]: 2025-10-02 20:03:23.312 2 DEBUG oslo_concurrency.processutils [None req-1beb6f74-be80-461c-834f-0d540fc31d85 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:23 compute-0 nova_compute[194781]: 2025-10-02 20:03:23.357 2 DEBUG oslo_concurrency.processutils [None req-1beb6f74-be80-461c-834f-0d540fc31d85 5e0565a40c4e40f9ab77ce190f9527c5 c6bd7784161a4cc3a2e8715feee92228 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:03:24 compute-0 nova_compute[194781]: 2025-10-02 20:03:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:26 compute-0 podman[272931]: 2025-10-02 20:03:26.718645121 +0000 UTC m=+0.093783156 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:03:26 compute-0 podman[272932]: 2025-10-02 20:03:26.726679396 +0000 UTC m=+0.094320789 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 20:03:26 compute-0 nova_compute[194781]: 2025-10-02 20:03:26.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:29 compute-0 nova_compute[194781]: 2025-10-02 20:03:29.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:29 compute-0 podman[209015]: time="2025-10-02T20:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:03:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:03:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5227 "" "Go-http-client/1.1"
Oct 02 20:03:30 compute-0 podman[272975]: 2025-10-02 20:03:30.725732714 +0000 UTC m=+0.091414135 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 20:03:30 compute-0 podman[272976]: 2025-10-02 20:03:30.772517268 +0000 UTC m=+0.138298562 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 20:03:31 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:03:31.083 105943 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e6:aa:d7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:ec:ab:4a:b2:29'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 20:03:31 compute-0 nova_compute[194781]: 2025-10-02 20:03:31.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:31 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:03:31.085 105943 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: ERROR   20:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:03:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:03:31 compute-0 nova_compute[194781]: 2025-10-02 20:03:31.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:34 compute-0 nova_compute[194781]: 2025-10-02 20:03:34.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:34 compute-0 podman[273017]: 2025-10-02 20:03:34.747509014 +0000 UTC m=+0.110809490 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:03:36 compute-0 nova_compute[194781]: 2025-10-02 20:03:36.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:39 compute-0 nova_compute[194781]: 2025-10-02 20:03:39.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:40 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:03:40.088 105943 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbab9e90-4b9d-4a75-81b6-ad2c1de412c6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 20:03:41 compute-0 nova_compute[194781]: 2025-10-02 20:03:41.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:44 compute-0 nova_compute[194781]: 2025-10-02 20:03:44.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:44 compute-0 podman[273040]: 2025-10-02 20:03:44.90593206 +0000 UTC m=+0.120910967 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:03:44 compute-0 podman[273041]: 2025-10-02 20:03:44.933728349 +0000 UTC m=+0.138724711 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 20:03:46 compute-0 nova_compute[194781]: 2025-10-02 20:03:46.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:03:47.519 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:03:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:03:47.520 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:03:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:03:47.521 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:03:48 compute-0 nova_compute[194781]: 2025-10-02 20:03:48.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:48 compute-0 nova_compute[194781]: 2025-10-02 20:03:48.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:03:48 compute-0 podman[273079]: 2025-10-02 20:03:48.749013118 +0000 UTC m=+0.104447738 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:03:48 compute-0 podman[273077]: 2025-10-02 20:03:48.784629647 +0000 UTC m=+0.145959297 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64)
Oct 02 20:03:48 compute-0 podman[273078]: 2025-10-02 20:03:48.787776987 +0000 UTC m=+0.139803930 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30)
Oct 02 20:03:49 compute-0 nova_compute[194781]: 2025-10-02 20:03:49.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:51 compute-0 nova_compute[194781]: 2025-10-02 20:03:51.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:54 compute-0 nova_compute[194781]: 2025-10-02 20:03:54.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:54 compute-0 nova_compute[194781]: 2025-10-02 20:03:54.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:56 compute-0 nova_compute[194781]: 2025-10-02 20:03:56.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:57 compute-0 nova_compute[194781]: 2025-10-02 20:03:57.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:57 compute-0 podman[273137]: 2025-10-02 20:03:57.739275129 +0000 UTC m=+0.098225469 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:03:57 compute-0 podman[273138]: 2025-10-02 20:03:57.774877588 +0000 UTC m=+0.125093285 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.145 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.146 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.147 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.148 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.353 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.454 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.456 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.558 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.560 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.648 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.651 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:03:59 compute-0 podman[209015]: time="2025-10-02T20:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:03:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:03:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5232 "" "Go-http-client/1.1"
Oct 02 20:03:59 compute-0 nova_compute[194781]: 2025-10-02 20:03:59.765 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:04:00 compute-0 nova_compute[194781]: 2025-10-02 20:04:00.376 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:04:00 compute-0 nova_compute[194781]: 2025-10-02 20:04:00.378 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.40175247192383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:04:00 compute-0 nova_compute[194781]: 2025-10-02 20:04:00.379 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:04:00 compute-0 nova_compute[194781]: 2025-10-02 20:04:00.380 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: ERROR   20:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:04:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.580 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.580 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.581 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.700 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:04:01 compute-0 podman[273190]: 2025-10-02 20:04:01.767367089 +0000 UTC m=+0.132879054 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.809 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.811 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:04:01 compute-0 nova_compute[194781]: 2025-10-02 20:04:01.811 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:04:01 compute-0 podman[273191]: 2025-10-02 20:04:01.83281327 +0000 UTC m=+0.184376138 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 02 20:04:03 compute-0 nova_compute[194781]: 2025-10-02 20:04:03.809 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:03 compute-0 nova_compute[194781]: 2025-10-02 20:04:03.810 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:03 compute-0 nova_compute[194781]: 2025-10-02 20:04:03.811 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:04 compute-0 nova_compute[194781]: 2025-10-02 20:04:04.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:05 compute-0 podman[273232]: 2025-10-02 20:04:05.751331512 +0000 UTC m=+0.108030849 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.037 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.038 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.895 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.896 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.896 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:04:06 compute-0 nova_compute[194781]: 2025-10-02 20:04:06.897 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:04:09 compute-0 nova_compute[194781]: 2025-10-02 20:04:09.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:10 compute-0 nova_compute[194781]: 2025-10-02 20:04:10.459 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:04:10 compute-0 nova_compute[194781]: 2025-10-02 20:04:10.490 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:04:10 compute-0 nova_compute[194781]: 2025-10-02 20:04:10.491 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:04:10 compute-0 nova_compute[194781]: 2025-10-02 20:04:10.492 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:11 compute-0 nova_compute[194781]: 2025-10-02 20:04:11.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:14 compute-0 nova_compute[194781]: 2025-10-02 20:04:14.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:15 compute-0 podman[273257]: 2025-10-02 20:04:15.783746519 +0000 UTC m=+0.130821991 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm)
Oct 02 20:04:15 compute-0 podman[273256]: 2025-10-02 20:04:15.818357072 +0000 UTC m=+0.169340674 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct 02 20:04:16 compute-0 nova_compute[194781]: 2025-10-02 20:04:16.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:19 compute-0 nova_compute[194781]: 2025-10-02 20:04:19.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:19 compute-0 podman[273297]: 2025-10-02 20:04:19.777832682 +0000 UTC m=+0.138191819 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, vcs-type=git)
Oct 02 20:04:19 compute-0 podman[273299]: 2025-10-02 20:04:19.792850656 +0000 UTC m=+0.141861563 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 20:04:19 compute-0 podman[273298]: 2025-10-02 20:04:19.818435259 +0000 UTC m=+0.174419794 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Oct 02 20:04:21 compute-0 nova_compute[194781]: 2025-10-02 20:04:21.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:24 compute-0 nova_compute[194781]: 2025-10-02 20:04:24.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:26 compute-0 nova_compute[194781]: 2025-10-02 20:04:26.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:28 compute-0 podman[273356]: 2025-10-02 20:04:28.744110609 +0000 UTC m=+0.114291389 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:04:28 compute-0 podman[273357]: 2025-10-02 20:04:28.770610875 +0000 UTC m=+0.129721273 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 20:04:29 compute-0 nova_compute[194781]: 2025-10-02 20:04:29.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:29 compute-0 podman[209015]: time="2025-10-02T20:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:04:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:04:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5233 "" "Go-http-client/1.1"
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: ERROR   20:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:04:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:04:31 compute-0 nova_compute[194781]: 2025-10-02 20:04:31.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:32 compute-0 podman[273397]: 2025-10-02 20:04:32.751599823 +0000 UTC m=+0.111000815 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 20:04:32 compute-0 podman[273398]: 2025-10-02 20:04:32.824861463 +0000 UTC m=+0.180057218 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 20:04:34 compute-0 nova_compute[194781]: 2025-10-02 20:04:34.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:36 compute-0 podman[273437]: 2025-10-02 20:04:36.783047989 +0000 UTC m=+0.140980830 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 02 20:04:36 compute-0 nova_compute[194781]: 2025-10-02 20:04:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:39 compute-0 nova_compute[194781]: 2025-10-02 20:04:39.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:41 compute-0 nova_compute[194781]: 2025-10-02 20:04:41.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:44 compute-0 nova_compute[194781]: 2025-10-02 20:04:44.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:46 compute-0 podman[273460]: 2025-10-02 20:04:46.791684027 +0000 UTC m=+0.148633356 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct 02 20:04:46 compute-0 podman[273459]: 2025-10-02 20:04:46.792898418 +0000 UTC m=+0.155192114 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 20:04:46 compute-0 nova_compute[194781]: 2025-10-02 20:04:46.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:04:47.521 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:04:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:04:47.522 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:04:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:04:47.522 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:04:49 compute-0 nova_compute[194781]: 2025-10-02 20:04:49.035 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:49 compute-0 nova_compute[194781]: 2025-10-02 20:04:49.035 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:04:49 compute-0 nova_compute[194781]: 2025-10-02 20:04:49.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:50 compute-0 podman[273498]: 2025-10-02 20:04:50.770423328 +0000 UTC m=+0.119269486 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc.)
Oct 02 20:04:50 compute-0 podman[273500]: 2025-10-02 20:04:50.790706506 +0000 UTC m=+0.125911636 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 20:04:50 compute-0 podman[273499]: 2025-10-02 20:04:50.800768333 +0000 UTC m=+0.142705644 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc.)
Oct 02 20:04:51 compute-0 nova_compute[194781]: 2025-10-02 20:04:51.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:54 compute-0 nova_compute[194781]: 2025-10-02 20:04:54.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:55 compute-0 nova_compute[194781]: 2025-10-02 20:04:55.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:56 compute-0 nova_compute[194781]: 2025-10-02 20:04:56.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:57 compute-0 nova_compute[194781]: 2025-10-02 20:04:57.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:04:59 compute-0 podman[209015]: time="2025-10-02T20:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:04:59 compute-0 nova_compute[194781]: 2025-10-02 20:04:59.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:04:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:04:59 compute-0 podman[273553]: 2025-10-02 20:04:59.761456062 +0000 UTC m=+0.118319542 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:04:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5229 "" "Go-http-client/1.1"
Oct 02 20:04:59 compute-0 podman[273554]: 2025-10-02 20:04:59.776999229 +0000 UTC m=+0.128637525 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:00 compute-0 rsyslogd[243731]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:05:00 compute-0 rsyslogd[243731]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.067 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.068 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.068 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.192 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.288 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.290 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.385 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.387 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.450 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.452 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:05:00 compute-0 nova_compute[194781]: 2025-10-02 20:05:00.520 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.036 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.038 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.40179061889648GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.039 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.040 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.138 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.139 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.139 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.192 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.210 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.212 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.212 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:01 compute-0 anacron[98887]: Job `cron.weekly' started
Oct 02 20:05:01 compute-0 anacron[98887]: Job `cron.weekly' terminated
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: ERROR   20:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:05:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:05:01 compute-0 nova_compute[194781]: 2025-10-02 20:05:01.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:02 compute-0 nova_compute[194781]: 2025-10-02 20:05:02.212 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:02 compute-0 nova_compute[194781]: 2025-10-02 20:05:02.214 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:03 compute-0 nova_compute[194781]: 2025-10-02 20:05:03.031 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:03 compute-0 podman[273611]: 2025-10-02 20:05:03.761694812 +0000 UTC m=+0.114832063 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:05:03 compute-0 podman[273612]: 2025-10-02 20:05:03.831406212 +0000 UTC m=+0.178166990 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 20:05:04 compute-0 nova_compute[194781]: 2025-10-02 20:05:04.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.036 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.930 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.931 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.931 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:05:06 compute-0 nova_compute[194781]: 2025-10-02 20:05:06.932 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:05:07 compute-0 podman[273653]: 2025-10-02 20:05:07.762056196 +0000 UTC m=+0.128681507 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 02 20:05:09 compute-0 nova_compute[194781]: 2025-10-02 20:05:09.551 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:05:09 compute-0 nova_compute[194781]: 2025-10-02 20:05:09.588 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:05:09 compute-0 nova_compute[194781]: 2025-10-02 20:05:09.588 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:05:09 compute-0 nova_compute[194781]: 2025-10-02 20:05:09.590 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:09 compute-0 nova_compute[194781]: 2025-10-02 20:05:09.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:11 compute-0 nova_compute[194781]: 2025-10-02 20:05:11.585 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:11 compute-0 nova_compute[194781]: 2025-10-02 20:05:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.954 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.955 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdba41fa2d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdb9fc6d430>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.965 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7aab78e5-2ff6-460d-87d6-f4c21f2d4403', 'name': 'test_0', 'flavor': {'id': '9b897399-e7fe-4a3e-9cc1-c1f819a27557', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2c6780ee-8ca6-4dab-831c-c89907768547'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c6bd7784161a4cc3a2e8715feee92228', 'user_id': '5e0565a40c4e40f9ab77ce190f9527c5', 'hostId': '536658cc3b3b3040a5dd53f51fb24cd95b743344cf2b37c945bb87a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.966 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba7e79010>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.967 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:12 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:12.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-02T20:05:12.967564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.010 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/cpu volume: 72700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdba41f8830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.012 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-02T20:05:13.014849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.016 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdba41f9910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-02T20:05:13.020944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.029 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdba41f8890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.032 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f88c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.034 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-02T20:05:13.033876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.035 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdba41f90d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.037 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9100>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-02T20:05:13.039276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.040 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdba41f9bb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.042 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-02T20:05:13.042143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdba41f9a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f99a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.044 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.044 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdba41f9c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.045 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-02T20:05:13.044543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdba41f8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba80f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.047 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-02T20:05:13.047627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.140 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.141 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.141 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdba41f88f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.143 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.143 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.144 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdba41f9790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.146 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdba41fa360>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.147 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.147 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-02T20:05:13.143801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-02T20:05:13.146011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-02T20:05:13.147913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.191 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.192 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.193 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdba41f8350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.195 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 1202680333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-02T20:05:13.195420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.196 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 116800005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.197 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.latency volume: 93005923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdba41f8080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.198 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdba41f83b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f83e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.199 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.200 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.200 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdba41f9be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.201 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f9c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-02T20:05:13.199418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.203 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdba41f8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.204 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.205 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-02T20:05:13.202767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-02T20:05:13.204858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.205 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.206 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdba41f8470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f84a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.208 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.208 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.209 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-02T20:05:13.207879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdba41f84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.211 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 3458690909 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-02T20:05:13.211151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.212 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 11977832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.212 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdba41fa300>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.213 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41fad20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.214 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.214 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 21831680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.214 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.215 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdba41f8530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.216 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.217 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.217 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.218 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.219 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdba41f8b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.219 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.219 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.219 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.220 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdba41f8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.221 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-02T20:05:13.214030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-02T20:05:13.216924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.222 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-02T20:05:13.219855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdba41f85f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.222 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdba41f8e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f8e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.223 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-02T20:05:13.221938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.224 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdba41f81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdba6a51ee0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.224 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.224 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdba41f97c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.225 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.225 14 DEBUG ceilometer.compute.pollsters [-] 7aab78e5-2ff6-460d-87d6-f4c21f2d4403/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-02T20:05:13.223078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-02T20:05:13.223882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-02T20:05:13.225054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:13 compute-0 ceilometer_agent_compute[205529]: 2025-10-02 20:05:13.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct 02 20:05:14 compute-0 nova_compute[194781]: 2025-10-02 20:05:14.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:16 compute-0 nova_compute[194781]: 2025-10-02 20:05:16.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:17 compute-0 podman[273678]: 2025-10-02 20:05:17.772538326 +0000 UTC m=+0.133191632 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:05:17 compute-0 podman[273679]: 2025-10-02 20:05:17.782684345 +0000 UTC m=+0.139238696 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:05:19 compute-0 nova_compute[194781]: 2025-10-02 20:05:19.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:21 compute-0 podman[273716]: 2025-10-02 20:05:21.774719484 +0000 UTC m=+0.125934487 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc.)
Oct 02 20:05:21 compute-0 podman[273717]: 2025-10-02 20:05:21.786532835 +0000 UTC m=+0.126494730 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:05:21 compute-0 podman[273715]: 2025-10-02 20:05:21.80238165 +0000 UTC m=+0.163633139 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Oct 02 20:05:21 compute-0 nova_compute[194781]: 2025-10-02 20:05:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:24 compute-0 nova_compute[194781]: 2025-10-02 20:05:24.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:26 compute-0 nova_compute[194781]: 2025-10-02 20:05:26.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:29 compute-0 podman[209015]: time="2025-10-02T20:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:05:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:05:29 compute-0 nova_compute[194781]: 2025-10-02 20:05:29.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5232 "" "Go-http-client/1.1"
Oct 02 20:05:30 compute-0 podman[273775]: 2025-10-02 20:05:30.733846868 +0000 UTC m=+0.094780851 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 02 20:05:30 compute-0 podman[273776]: 2025-10-02 20:05:30.763256959 +0000 UTC m=+0.127429804 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: ERROR   20:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:05:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:05:31 compute-0 nova_compute[194781]: 2025-10-02 20:05:31.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:34 compute-0 podman[273815]: 2025-10-02 20:05:34.747348527 +0000 UTC m=+0.121511773 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 20:05:34 compute-0 nova_compute[194781]: 2025-10-02 20:05:34.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:34 compute-0 podman[273816]: 2025-10-02 20:05:34.818506124 +0000 UTC m=+0.173931282 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 20:05:36 compute-0 nova_compute[194781]: 2025-10-02 20:05:36.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:38 compute-0 podman[273855]: 2025-10-02 20:05:38.774938665 +0000 UTC m=+0.134479315 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 02 20:05:39 compute-0 nova_compute[194781]: 2025-10-02 20:05:39.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:41 compute-0 nova_compute[194781]: 2025-10-02 20:05:41.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:44 compute-0 nova_compute[194781]: 2025-10-02 20:05:44.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:46 compute-0 nova_compute[194781]: 2025-10-02 20:05:46.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:05:47.522 105943 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:05:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:05:47.523 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:05:47 compute-0 ovn_metadata_agent[105919]: 2025-10-02 20:05:47.524 105943 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:05:48 compute-0 podman[273879]: 2025-10-02 20:05:48.755269296 +0000 UTC m=+0.117354607 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 20:05:48 compute-0 podman[273880]: 2025-10-02 20:05:48.798290534 +0000 UTC m=+0.146618994 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Oct 02 20:05:49 compute-0 nova_compute[194781]: 2025-10-02 20:05:49.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:49 compute-0 nova_compute[194781]: 2025-10-02 20:05:49.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 20:05:49 compute-0 nova_compute[194781]: 2025-10-02 20:05:49.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:51 compute-0 nova_compute[194781]: 2025-10-02 20:05:51.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:52 compute-0 podman[273921]: 2025-10-02 20:05:52.779257523 +0000 UTC m=+0.136245630 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct 02 20:05:52 compute-0 podman[273922]: 2025-10-02 20:05:52.785167854 +0000 UTC m=+0.133774487 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 20:05:52 compute-0 podman[273920]: 2025-10-02 20:05:52.787491053 +0000 UTC m=+0.147941178 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Oct 02 20:05:54 compute-0 nova_compute[194781]: 2025-10-02 20:05:54.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:55 compute-0 nova_compute[194781]: 2025-10-02 20:05:55.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:56 compute-0 nova_compute[194781]: 2025-10-02 20:05:56.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:05:57 compute-0 nova_compute[194781]: 2025-10-02 20:05:57.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:05:59 compute-0 podman[209015]: time="2025-10-02T20:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:05:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:05:59 compute-0 podman[209015]: @ - - [02/Oct/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5232 "" "Go-http-client/1.1"
Oct 02 20:05:59 compute-0 nova_compute[194781]: 2025-10-02 20:05:59.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.071 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.072 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.073 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.165 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.274 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.276 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.377 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.379 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.462 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.464 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 20:06:00 compute-0 nova_compute[194781]: 2025-10-02 20:06:00.564 2 DEBUG oslo_concurrency.processutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7aab78e5-2ff6-460d-87d6-f4c21f2d4403/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.062 2 WARNING nova.virt.libvirt.driver [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.063 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5043MB free_disk=72.40178680419922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.064 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.065 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.262 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Instance 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.263 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.263 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.360 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing inventories for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: ERROR   20:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:06:01 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.485 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating ProviderTree inventory for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.486 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Updating inventory in ProviderTree for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.502 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing aggregate associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.538 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Refreshing trait associations for resource provider 828c5fec-9680-4b70-a7ce-11a1217a9c75, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NODE,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.620 2 DEBUG nova.compute.provider_tree [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed in ProviderTree for provider: 828c5fec-9680-4b70-a7ce-11a1217a9c75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.638 2 DEBUG nova.scheduler.client.report [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Inventory has not changed for provider 828c5fec-9680-4b70-a7ce-11a1217a9c75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.641 2 DEBUG nova.compute.resource_tracker [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.641 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 20:06:01 compute-0 podman[273988]: 2025-10-02 20:06:01.738984591 +0000 UTC m=+0.100557339 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 02 20:06:01 compute-0 podman[273989]: 2025-10-02 20:06:01.776418366 +0000 UTC m=+0.133832557 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 20:06:01 compute-0 nova_compute[194781]: 2025-10-02 20:06:01.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:02 compute-0 nova_compute[194781]: 2025-10-02 20:06:02.643 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:03 compute-0 nova_compute[194781]: 2025-10-02 20:06:03.030 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:04 compute-0 nova_compute[194781]: 2025-10-02 20:06:04.033 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:04 compute-0 nova_compute[194781]: 2025-10-02 20:06:04.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:05 compute-0 podman[274030]: 2025-10-02 20:06:05.74823209 +0000 UTC m=+0.118367613 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 20:06:05 compute-0 podman[274031]: 2025-10-02 20:06:05.845167386 +0000 UTC m=+0.208351871 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.036 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.037 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.048 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.859 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquiring lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.860 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Acquired lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.861 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.861 2 DEBUG nova.objects.instance [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aab78e5-2ff6-460d-87d6-f4c21f2d4403 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 20:06:06 compute-0 nova_compute[194781]: 2025-10-02 20:06:06.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:09 compute-0 podman[274073]: 2025-10-02 20:06:09.762500958 +0000 UTC m=+0.128537123 container health_status 723ebd94f64ae14fd2ecbd992e4770a11d0debf2957805df1e239d77cda15b62 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 02 20:06:09 compute-0 nova_compute[194781]: 2025-10-02 20:06:09.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:10 compute-0 nova_compute[194781]: 2025-10-02 20:06:10.238 2 DEBUG nova.network.neutron [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updating instance_info_cache with network_info: [{"id": "db098052-6623-4e4a-9fb7-65b4006efb6f", "address": "fa:16:3e:85:88:9d", "network": {"id": "b5760fda-9195-4e68-8506-4362bf1edf4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.201", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6bd7784161a4cc3a2e8715feee92228", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb098052-66", "ovs_interfaceid": "db098052-6623-4e4a-9fb7-65b4006efb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 20:06:10 compute-0 nova_compute[194781]: 2025-10-02 20:06:10.262 2 DEBUG oslo_concurrency.lockutils [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Releasing lock "refresh_cache-7aab78e5-2ff6-460d-87d6-f4c21f2d4403" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 20:06:10 compute-0 nova_compute[194781]: 2025-10-02 20:06:10.263 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] [instance: 7aab78e5-2ff6-460d-87d6-f4c21f2d4403] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 20:06:10 compute-0 nova_compute[194781]: 2025-10-02 20:06:10.264 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:11 compute-0 nova_compute[194781]: 2025-10-02 20:06:11.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:13 compute-0 nova_compute[194781]: 2025-10-02 20:06:13.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:13 compute-0 nova_compute[194781]: 2025-10-02 20:06:13.034 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 20:06:14 compute-0 nova_compute[194781]: 2025-10-02 20:06:14.106 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:14 compute-0 nova_compute[194781]: 2025-10-02 20:06:14.107 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 20:06:14 compute-0 nova_compute[194781]: 2025-10-02 20:06:14.126 2 DEBUG nova.compute.manager [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 20:06:14 compute-0 nova_compute[194781]: 2025-10-02 20:06:14.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:16 compute-0 nova_compute[194781]: 2025-10-02 20:06:16.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:17 compute-0 sshd-session[274097]: Accepted publickey for zuul from 192.168.122.10 port 47408 ssh2: ECDSA SHA256:NK1FN69XIC8toEeMYITi4fybcrLDod61Rnc7CB/LNL0
Oct 02 20:06:17 compute-0 systemd-logind[798]: New session 36 of user zuul.
Oct 02 20:06:17 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 02 20:06:17 compute-0 sshd-session[274097]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 20:06:17 compute-0 sudo[274101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 20:06:17 compute-0 sudo[274101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 20:06:19 compute-0 podman[274205]: 2025-10-02 20:06:19.777836219 +0000 UTC m=+0.128106112 container health_status 1d0802ec8e462f958eb92e121c35ec678f85793c039b759dd69a93403ed49bab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.vendor=CentOS)
Oct 02 20:06:19 compute-0 nova_compute[194781]: 2025-10-02 20:06:19.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:19 compute-0 podman[274206]: 2025-10-02 20:06:19.830991216 +0000 UTC m=+0.178702253 container health_status 29adca77c9edd88782acdad03ab5dc39f279d2753749ec6ee4806cfea3bf254b (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=874b68da40aaccacbf39bda6727f8345, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 20:06:21 compute-0 nova_compute[194781]: 2025-10-02 20:06:21.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:23 compute-0 ovs-vsctl[274318]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 20:06:23 compute-0 podman[274356]: 2025-10-02 20:06:23.762280514 +0000 UTC m=+0.117500801 container health_status a17f8c2cf9e26fdf858ad87ddc7b0f0bb7796d0a4b7bdd78c110678f771a91c1 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.6, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 02 20:06:23 compute-0 podman[274358]: 2025-10-02 20:06:23.766960643 +0000 UTC m=+0.113265953 container health_status d36b5d2f4387ce1101687f1bcb64995dcf43b9faed43b72a537ca666d3e7ee9c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 20:06:23 compute-0 podman[274357]: 2025-10-02 20:06:23.790920385 +0000 UTC m=+0.144014618 container health_status c779d1f81b964ddea020a036b4b72e4257f7a6f2155a3d1a2129cadd320a58a3 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git)
Oct 02 20:06:24 compute-0 nova_compute[194781]: 2025-10-02 20:06:24.034 2 DEBUG oslo_service.periodic_task [None req-f7b37944-f6bf-404b-ac49-fec61a10577f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 20:06:24 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 20:06:24 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 20:06:24 compute-0 virtqemud[194432]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 20:06:24 compute-0 nova_compute[194781]: 2025-10-02 20:06:24.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:26 compute-0 crontab[274802]: (root) LIST (root)
Oct 02 20:06:26 compute-0 nova_compute[194781]: 2025-10-02 20:06:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:28 compute-0 systemd[1]: Starting Hostname Service...
Oct 02 20:06:28 compute-0 systemd[1]: Started Hostname Service.
Oct 02 20:06:29 compute-0 podman[209015]: time="2025-10-02T20:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 02 20:06:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31981 "" "Go-http-client/1.1"
Oct 02 20:06:29 compute-0 podman[209015]: @ - - [02/Oct/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5231 "" "Go-http-client/1.1"
Oct 02 20:06:29 compute-0 nova_compute[194781]: 2025-10-02 20:06:29.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: ERROR   20:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 02 20:06:31 compute-0 openstack_network_exporter[211160]: 
Oct 02 20:06:31 compute-0 nova_compute[194781]: 2025-10-02 20:06:31.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:32 compute-0 podman[275168]: 2025-10-02 20:06:32.773431332 +0000 UTC m=+0.132335909 container health_status e2ec57b7db84d6723a03da64f142b36170d4cc8b99467046473a32861aa1e4fd (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 02 20:06:32 compute-0 podman[275166]: 2025-10-02 20:06:32.776612193 +0000 UTC m=+0.137874641 container health_status 61860448daa5f54398c1c7d2bbbea0796e025d90a15800fd7d3a1892069b3da4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 02 20:06:34 compute-0 nova_compute[194781]: 2025-10-02 20:06:34.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:36 compute-0 podman[275622]: 2025-10-02 20:06:36.386429444 +0000 UTC m=+0.102307143 container health_status 40ce284458b17c0fc7d360665d423406d732dfc3ddcb6c70bdf3f7776e1a354d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 20:06:36 compute-0 podman[275626]: 2025-10-02 20:06:36.454319717 +0000 UTC m=+0.170013212 container health_status d77c65078cb405ea343edbdca03870e8273f9849bddbaf8a987b9ed569505aa2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Oct 02 20:06:36 compute-0 nova_compute[194781]: 2025-10-02 20:06:36.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 20:06:37 compute-0 ovs-appctl[276016]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:06:37 compute-0 ovs-appctl[276020]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 20:06:37 compute-0 ovs-appctl[276028]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
